00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 1820
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3086
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.038 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.038 The recommended git tool is: git
00:00:00.038 using credential 00000000-0000-0000-0000-000000000002
00:00:00.040 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.064 Fetching changes from the remote Git repository
00:00:00.080 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.108 Using shallow fetch with depth 1
00:00:00.108 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.108 > git --version # timeout=10
00:00:00.140 > git --version # 'git version 2.39.2'
00:00:00.140 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.141 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.141 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.056 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.068 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.079 Checking out Revision 10da8f6d99838e411e4e94523ded0bfebf3e7100 (FETCH_HEAD)
00:00:04.079 > git config core.sparsecheckout # timeout=10
00:00:04.090 > git read-tree -mu HEAD # timeout=10
00:00:04.105 > git checkout -f 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=5
00:00:04.122 Commit message: "scripts/create_git_mirror: Update path to xnvme submodule"
00:00:04.122 > git rev-list --no-walk 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=10
00:00:04.239 [Pipeline] Start of Pipeline
00:00:04.276 [Pipeline] library
00:00:04.278 Loading library shm_lib@master
00:00:04.278 Library shm_lib@master is cached. Copying from home.
00:00:04.294 [Pipeline] node
00:00:04.300 Running on GP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:04.301 [Pipeline] {
00:00:04.308 [Pipeline] catchError
00:00:04.309 [Pipeline] {
00:00:04.318 [Pipeline] wrap
00:00:04.325 [Pipeline] {
00:00:04.330 [Pipeline] stage
00:00:04.331 [Pipeline] { (Prologue)
00:00:04.501 [Pipeline] sh
00:00:04.784 + logger -p user.info -t JENKINS-CI
00:00:04.805 [Pipeline] echo
00:00:04.806 Node: GP6
00:00:04.814 [Pipeline] sh
00:00:05.108 [Pipeline] setCustomBuildProperty
00:00:05.119 [Pipeline] echo
00:00:05.121 Cleanup processes
00:00:05.126 [Pipeline] sh
00:00:05.406 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.406 3801979 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.420 [Pipeline] sh
00:00:05.706 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.706 ++ grep -v 'sudo pgrep'
00:00:05.706 ++ awk '{print $1}'
00:00:05.706 + sudo kill -9
00:00:05.706 + true
00:00:05.722 [Pipeline] cleanWs
00:00:05.732 [WS-CLEANUP] Deleting project workspace...
00:00:05.732 [WS-CLEANUP] Deferred wipeout is used...
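The "Cleanup processes" step above hunts down SPDK processes left over from a previous run and kills them before the workspace is wiped. A minimal standalone sketch of that idiom, assuming the workspace path from this log; the `pids` variable and the non-empty guard are illustrative additions, not part of the actual pipeline script, which tolerates an empty PID list via `+ true`:

    #!/usr/bin/env bash
    # Kill any SPDK processes that survived an earlier run of this job.
    WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest
    # List matching processes with full command lines, drop the pgrep
    # invocation itself, and keep only the PID column.
    pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
    # Ignore failures the same way the pipeline does with `|| true`.
    [ -n "$pids" ] && sudo kill -9 $pids || true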
00:00:05.739 [WS-CLEANUP] done
00:00:05.744 [Pipeline] setCustomBuildProperty
00:00:05.759 [Pipeline] sh
00:00:06.039 + sudo git config --global --replace-all safe.directory '*'
00:00:06.107 [Pipeline] nodesByLabel
00:00:06.108 Found a total of 1 nodes with the 'sorcerer' label
00:00:06.117 [Pipeline] httpRequest
00:00:06.121 HttpMethod: GET
00:00:06.122 URL: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz
00:00:06.125 Sending request to url: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz
00:00:06.141 Response Code: HTTP/1.1 200 OK
00:00:06.142 Success: Status code 200 is in the accepted range: 200,404
00:00:06.142 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz
00:00:16.353 [Pipeline] sh
00:00:16.657 + tar --no-same-owner -xf jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz
00:00:16.675 [Pipeline] httpRequest
00:00:16.679 HttpMethod: GET
00:00:16.680 URL: http://10.211.164.101/packages/spdk_4506c0c368f63ba9b9b013ecff216cef6ee8d0a4.tar.gz
00:00:16.681 Sending request to url: http://10.211.164.101/packages/spdk_4506c0c368f63ba9b9b013ecff216cef6ee8d0a4.tar.gz
00:00:16.683 Response Code: HTTP/1.1 200 OK
00:00:16.683 Success: Status code 200 is in the accepted range: 200,404
00:00:16.684 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_4506c0c368f63ba9b9b013ecff216cef6ee8d0a4.tar.gz
00:00:26.120 [Pipeline] sh
00:00:26.401 + tar --no-same-owner -xf spdk_4506c0c368f63ba9b9b013ecff216cef6ee8d0a4.tar.gz
00:00:29.707 [Pipeline] sh
00:00:29.990 + git -C spdk log --oneline -n5
00:00:29.990 4506c0c36 test/common: Enable inherit_errexit
00:00:29.990 b24df7cfa test: Drop superfluous calls to print_backtrace()
00:00:29.990 7b52e4c17 test/scheduler: Meassure utime of $spdk_pid threads as a fallback
00:00:29.990 1dc065205 test/scheduler: Calculate median of the cpu load samples
00:00:29.990 b22f1b34d test/scheduler: Enhance lookup of the $old_cgroup in move_proc()
00:00:30.009 [Pipeline] withCredentials
00:00:30.019 > git --version # timeout=10
00:00:30.031 > git --version # 'git version 2.39.2'
00:00:30.047 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:00:30.048 [Pipeline] {
00:00:30.057 [Pipeline] retry
00:00:30.059 [Pipeline] {
00:00:30.077 [Pipeline] sh
00:00:30.389 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4
00:00:30.401 [Pipeline] }
00:00:30.422 [Pipeline] // retry
00:00:30.428 [Pipeline] }
00:00:30.450 [Pipeline] // withCredentials
00:00:30.463 [Pipeline] httpRequest
00:00:30.467 HttpMethod: GET
00:00:30.468 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:30.472 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:30.475 Response Code: HTTP/1.1 200 OK
00:00:30.476 Success: Status code 200 is in the accepted range: 200,404
00:00:30.476 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:35.857 [Pipeline] sh
00:00:36.138 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:38.056 [Pipeline] sh
00:00:38.334 + git -C dpdk log --oneline -n5
00:00:38.334 caf0f5d395 version: 22.11.4
00:00:38.334 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:00:38.334 dc9c799c7d vhost: fix missing spinlock unlock
00:00:38.334 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:00:38.334 6ef77f2a5e net/gve: fix RX buffer size alignment
00:00:38.344 [Pipeline] }
00:00:38.363 [Pipeline] // stage
00:00:38.371 [Pipeline] stage
00:00:38.373 [Pipeline] { (Prepare)
00:00:38.395 [Pipeline] writeFile
00:00:38.412 [Pipeline] sh
00:00:38.694 + logger -p user.info -t JENKINS-CI
00:00:38.708 [Pipeline] sh
00:00:38.989 + logger -p user.info -t JENKINS-CI
00:00:39.002 [Pipeline] sh
00:00:39.283 + cat autorun-spdk.conf
00:00:39.283 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:39.283 SPDK_TEST_NVMF=1
00:00:39.283 SPDK_TEST_NVME_CLI=1
00:00:39.284 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:39.284 SPDK_TEST_NVMF_NICS=e810
00:00:39.284 SPDK_TEST_VFIOUSER=1
00:00:39.284 SPDK_RUN_UBSAN=1
00:00:39.284 NET_TYPE=phy
00:00:39.284 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:00:39.284 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:00:39.291 RUN_NIGHTLY=1
00:00:39.296 [Pipeline] readFile
00:00:39.320 [Pipeline] withEnv
00:00:39.322 [Pipeline] {
00:00:39.338 [Pipeline] sh
00:00:39.621 + set -ex
00:00:39.622 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:39.622 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:39.622 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:39.622 ++ SPDK_TEST_NVMF=1
00:00:39.622 ++ SPDK_TEST_NVME_CLI=1
00:00:39.622 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:39.622 ++ SPDK_TEST_NVMF_NICS=e810
00:00:39.622 ++ SPDK_TEST_VFIOUSER=1
00:00:39.622 ++ SPDK_RUN_UBSAN=1
00:00:39.622 ++ NET_TYPE=phy
00:00:39.622 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:00:39.622 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:00:39.622 ++ RUN_NIGHTLY=1
00:00:39.622 + case $SPDK_TEST_NVMF_NICS in
00:00:39.622 + DRIVERS=ice
00:00:39.622 + [[ tcp == \r\d\m\a ]]
00:00:39.622 + [[ -n ice ]]
00:00:39.622 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:39.622 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:39.622 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:00:39.622 rmmod: ERROR: Module irdma is not currently loaded
00:00:39.622 rmmod: ERROR: Module i40iw is not currently loaded
00:00:39.622 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:39.622 + true
00:00:39.622 + for D in $DRIVERS
00:00:39.622 + sudo modprobe ice
00:00:39.622 + exit 0
00:00:39.632 [Pipeline] }
00:00:39.652 [Pipeline] // withEnv
00:00:39.660 [Pipeline] }
00:00:39.683 [Pipeline] // stage
00:00:39.703 [Pipeline] catchError
00:00:39.706 [Pipeline] {
00:00:39.727 [Pipeline] timeout
00:00:39.727 Timeout set to expire in 40 min
00:00:39.728 [Pipeline] {
00:00:39.738 [Pipeline] stage
00:00:39.739 [Pipeline] { (Tests)
00:00:39.748 [Pipeline] sh
00:00:40.021 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:40.021 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:40.021 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:40.021 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:40.021 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:40.021 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:40.021 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:40.021 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:40.021 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:40.021 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:40.021 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:40.021 + source /etc/os-release
00:00:40.021 ++ NAME='Fedora Linux'
00:00:40.021 ++ VERSION='38 (Cloud Edition)'
00:00:40.021 ++ ID=fedora
00:00:40.021 ++ VERSION_ID=38
00:00:40.021 ++ VERSION_CODENAME=
00:00:40.021 ++ PLATFORM_ID=platform:f38
00:00:40.021 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:40.021 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:40.021 ++ LOGO=fedora-logo-icon
00:00:40.021 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:40.021 ++ HOME_URL=https://fedoraproject.org/
00:00:40.021 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:40.021 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:40.021 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:40.021 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:40.021 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:40.021 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:40.021 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:40.021 ++ SUPPORT_END=2024-05-14
00:00:40.021 ++ VARIANT='Cloud Edition'
00:00:40.021 ++ VARIANT_ID=cloud
00:00:40.021 + uname -a
00:00:40.021 Linux spdk-gp-06 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:40.021 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:00:41.397 Hugepages
00:00:41.397 node     hugesize     free /  total
00:00:41.397 node0   1048576kB        0 /      0
00:00:41.397 node0      2048kB        0 /      0
00:00:41.397 node1   1048576kB        0 /      0
00:00:41.397 node1      2048kB        0 /      0
00:00:41.397
00:00:41.397 Type     BDF             Vendor Device NUMA    Driver           Device     Block devices
00:00:41.397 I/OAT    0000:00:04.0    8086   0e20   0       ioatdma          -          -
00:00:41.397 I/OAT    0000:00:04.1    8086   0e21   0       ioatdma          -          -
00:00:41.397 I/OAT    0000:00:04.2    8086   0e22   0       ioatdma          -          -
00:00:41.397 I/OAT    0000:00:04.3    8086   0e23   0       ioatdma          -          -
00:00:41.397 I/OAT    0000:00:04.4    8086   0e24   0       ioatdma          -          -
00:00:41.397 I/OAT    0000:00:04.5    8086   0e25   0       ioatdma          -          -
00:00:41.397 I/OAT    0000:00:04.6    8086   0e26   0       ioatdma          -          -
00:00:41.397 I/OAT    0000:00:04.7    8086   0e27   0       ioatdma          -          -
00:00:41.397 NVMe     0000:0b:00.0    8086   0a54   0       nvme             nvme0      nvme0n1
00:00:41.397 I/OAT    0000:80:04.0    8086   0e20   1       ioatdma          -          -
00:00:41.397 I/OAT    0000:80:04.1    8086   0e21   1       ioatdma          -          -
00:00:41.397 I/OAT    0000:80:04.2    8086   0e22   1       ioatdma          -          -
00:00:41.397 I/OAT    0000:80:04.3    8086   0e23   1       ioatdma          -          -
00:00:41.397 I/OAT    0000:80:04.4    8086   0e24   1       ioatdma          -          -
00:00:41.397 I/OAT    0000:80:04.5    8086   0e25   1       ioatdma          -          -
00:00:41.397 I/OAT    0000:80:04.6    8086   0e26   1       ioatdma          -          -
00:00:41.397 I/OAT    0000:80:04.7    8086   0e27   1       ioatdma          -          -
00:00:41.397 + rm -f /tmp/spdk-ld-path
00:00:41.397 + source autorun-spdk.conf
00:00:41.397 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:41.397 ++ SPDK_TEST_NVMF=1
00:00:41.397 ++ SPDK_TEST_NVME_CLI=1
00:00:41.397 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:41.397 ++ SPDK_TEST_NVMF_NICS=e810
00:00:41.397 ++ SPDK_TEST_VFIOUSER=1
00:00:41.397 ++ SPDK_RUN_UBSAN=1
00:00:41.397 ++ NET_TYPE=phy
00:00:41.397 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:00:41.397 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:00:41.397 ++ RUN_NIGHTLY=1
00:00:41.397 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:41.397 + [[ -n '' ]]
00:00:41.397 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
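The Hugepages counters printed by `setup.sh status` above come from sysfs. A minimal sketch of reading the same per-NUMA-node counters on a stock Linux system; the real setup.sh does considerably more, including the PCI/driver table, so this only reproduces the first block:

    #!/usr/bin/env bash
    # Print "node hugesize free / total" per NUMA node, as in the table above.
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            size=${hp##*hugepages-}            # e.g. 2048kB or 1048576kB
            free=$(cat "$hp/free_hugepages")
            total=$(cat "$hp/nr_hugepages")
            echo "$(basename "$node") $size $free / $total"
        done
    done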
00:00:41.397 + for M in /var/spdk/build-*-manifest.txt 00:00:41.397 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:41.397 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:41.397 + for M in /var/spdk/build-*-manifest.txt 00:00:41.397 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:41.397 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:41.397 ++ uname 00:00:41.397 + [[ Linux == \L\i\n\u\x ]] 00:00:41.397 + sudo dmesg -T 00:00:41.397 + sudo dmesg --clear 00:00:41.397 + dmesg_pid=3802767 00:00:41.397 + [[ Fedora Linux == FreeBSD ]] 00:00:41.397 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:41.397 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:41.397 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:41.397 + sudo dmesg -Tw 00:00:41.397 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:41.397 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:41.397 + [[ -x /usr/src/fio-static/fio ]] 00:00:41.397 + export FIO_BIN=/usr/src/fio-static/fio 00:00:41.397 + FIO_BIN=/usr/src/fio-static/fio 00:00:41.397 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:41.397 + [[ ! -v VFIO_QEMU_BIN ]] 00:00:41.397 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:41.397 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:41.397 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:41.397 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:41.397 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:41.397 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:41.397 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:41.397 Test configuration: 00:00:41.397 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:41.397 SPDK_TEST_NVMF=1 00:00:41.397 SPDK_TEST_NVME_CLI=1 00:00:41.397 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:41.397 SPDK_TEST_NVMF_NICS=e810 00:00:41.397 SPDK_TEST_VFIOUSER=1 00:00:41.397 SPDK_RUN_UBSAN=1 00:00:41.397 NET_TYPE=phy 00:00:41.397 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:00:41.397 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:41.656 RUN_NIGHTLY=1 01:29:05 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:41.656 01:29:05 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:41.656 01:29:05 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:41.656 01:29:05 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:41.656 01:29:05 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:41.656 01:29:05 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:00:41.656 01:29:05 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:41.656 01:29:05 -- paths/export.sh@5 -- $ export PATH 00:00:41.656 01:29:05 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:41.656 01:29:05 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:41.656 01:29:05 -- common/autobuild_common.sh@437 -- $ date +%s 00:00:41.656 01:29:05 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715729345.XXXXXX 00:00:41.656 01:29:05 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715729345.nTxGuT 00:00:41.656 01:29:05 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:00:41.656 01:29:05 -- common/autobuild_common.sh@443 -- $ '[' -n v22.11.4 ']' 00:00:41.656 01:29:05 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:41.656 01:29:05 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:00:41.656 01:29:05 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:41.656 01:29:05 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:41.656 01:29:05 -- common/autobuild_common.sh@453 -- $ get_config_params 00:00:41.656 01:29:05 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:00:41.656 01:29:05 -- common/autotest_common.sh@10 -- $ set +x 00:00:41.656 01:29:05 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:00:41.656 01:29:05 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:00:41.656 01:29:05 -- pm/common@17 -- $ local monitor 00:00:41.656 01:29:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:41.656 01:29:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:41.656 01:29:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:41.656 01:29:05 -- pm/common@21 -- $ date +%s 00:00:41.656 01:29:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:41.656 01:29:05 -- pm/common@21 -- $ date +%s 00:00:41.656 01:29:05 -- pm/common@25 -- $ sleep 1 00:00:41.656 01:29:05 -- pm/common@21 -- $ date +%s 00:00:41.656 01:29:05 -- pm/common@21 -- $ date +%s 00:00:41.656 01:29:05 -- pm/common@21 
-- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715729345 00:00:41.656 01:29:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715729345 00:00:41.656 01:29:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715729345 00:00:41.656 01:29:05 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715729345 00:00:41.656 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715729345_collect-vmstat.pm.log 00:00:41.656 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715729345_collect-cpu-load.pm.log 00:00:41.656 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715729345_collect-cpu-temp.pm.log 00:00:41.656 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715729345_collect-bmc-pm.bmc.pm.log 00:00:42.596 01:29:06 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:00:42.596 01:29:06 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:42.596 01:29:06 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:42.596 01:29:06 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:42.596 01:29:06 -- spdk/autobuild.sh@16 -- $ date -u 00:00:42.596 Tue May 14 11:29:06 PM UTC 2024 00:00:42.596 01:29:06 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:42.596 v24.05-pre-658-g4506c0c36 00:00:42.596 01:29:06 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:42.596 01:29:06 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:42.596 01:29:06 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:42.596 01:29:06 -- common/autotest_common.sh@1098 -- $ '[' 3 -le 1 ']' 00:00:42.596 01:29:06 -- common/autotest_common.sh@1104 -- $ xtrace_disable 00:00:42.596 01:29:06 -- common/autotest_common.sh@10 -- $ set +x 00:00:42.596 ************************************ 00:00:42.596 START TEST ubsan 00:00:42.596 ************************************ 00:00:42.596 01:29:06 ubsan -- common/autotest_common.sh@1122 -- $ echo 'using ubsan' 00:00:42.596 using ubsan 00:00:42.596 00:00:42.596 real 0m0.000s 00:00:42.596 user 0m0.000s 00:00:42.596 sys 0m0.000s 00:00:42.596 01:29:06 ubsan -- common/autotest_common.sh@1123 -- $ xtrace_disable 00:00:42.596 01:29:06 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:42.596 ************************************ 00:00:42.596 END TEST ubsan 00:00:42.596 ************************************ 00:00:42.596 01:29:06 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:00:42.596 01:29:06 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:00:42.596 01:29:06 -- common/autobuild_common.sh@429 -- $ run_test build_native_dpdk _build_native_dpdk 00:00:42.596 01:29:06 -- common/autotest_common.sh@1098 -- $ '[' 2 -le 1 ']' 00:00:42.596 01:29:06 -- common/autotest_common.sh@1104 -- $ xtrace_disable 00:00:42.596 01:29:06 -- common/autotest_common.sh@10 -- 
$ set +x 00:00:42.596 ************************************ 00:00:42.596 START TEST build_native_dpdk 00:00:42.596 ************************************ 00:00:42.596 01:29:06 build_native_dpdk -- common/autotest_common.sh@1122 -- $ _build_native_dpdk 00:00:42.596 01:29:06 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:00:42.596 01:29:06 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:00:42.596 01:29:06 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:00:42.596 01:29:06 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:00:42.596 01:29:06 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:00:42.596 01:29:06 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:00:42.596 01:29:06 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:00:42.596 01:29:06 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:00:42.596 01:29:06 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:00:42.596 01:29:06 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:00:42.596 01:29:06 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:00:42.596 01:29:06 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:00:42.596 01:29:06 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:00:42.596 01:29:06 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:00:42.596 01:29:06 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:42.596 01:29:06 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:42.596 01:29:06 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:00:42.596 01:29:06 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:00:42.596 01:29:06 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:42.596 01:29:06 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:00:42.596 caf0f5d395 version: 22.11.4 00:00:42.596 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:00:42.596 dc9c799c7d vhost: fix missing spinlock unlock 00:00:42.596 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:00:42.596 6ef77f2a5e net/gve: fix RX buffer size alignment 00:00:42.596 01:29:06 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:00:42.596 01:29:06 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:00:42.596 01:29:06 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:00:42.596 01:29:06 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:00:42.596 01:29:06 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:00:42.596 01:29:06 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:00:42.596 01:29:06 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:00:42.596 01:29:06 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:00:42.596 01:29:06 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:00:42.596 01:29:06 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:00:42.596 01:29:06 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:00:42.596 01:29:06 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:00:42.596 01:29:06 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:00:42.596 01:29:06 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:00:42.596 01:29:06 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:00:42.596 01:29:06 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:00:42.596 01:29:06 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:00:42.596 01:29:06 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:00:42.596 01:29:06 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:00:42.596 01:29:06 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:00:42.596 01:29:06 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:00:42.596 01:29:06 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:00:42.596 01:29:06 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:00:42.596 01:29:06 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:00:42.596 01:29:06 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:00:42.596 01:29:06 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:00:42.596 01:29:06 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:00:42.596 01:29:06 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:00:42.596 01:29:06 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:00:42.597 01:29:06 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:00:42.597 
01:29:06 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:00:42.597 01:29:06 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:00:42.597 01:29:06 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:00:42.597 01:29:06 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:00:42.597 01:29:06 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:00:42.597 01:29:06 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:00:42.597 01:29:06 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:00:42.597 01:29:06 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:00:42.597 01:29:06 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:00:42.597 01:29:06 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:00:42.597 01:29:06 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:00:42.597 01:29:06 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:00:42.597 01:29:06 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:00:42.597 01:29:06 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:00:42.597 01:29:06 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:00:42.597 01:29:06 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:00:42.597 patching file config/rte_config.h 00:00:42.597 Hunk #1 succeeded at 60 (offset 1 line). 00:00:42.597 01:29:06 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:00:42.597 01:29:06 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:00:42.597 01:29:06 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:00:42.597 01:29:06 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:00:42.597 01:29:06 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:00:46.794 The Meson build system 00:00:46.794 Version: 1.3.1 00:00:46.794 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:00:46.794 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:00:46.794 Build type: native build 00:00:46.794 Program cat found: YES (/usr/bin/cat) 00:00:46.794 Project name: DPDK 00:00:46.794 Project version: 22.11.4 00:00:46.794 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:00:46.794 C linker for the host machine: gcc ld.bfd 2.39-16 00:00:46.794 Host machine cpu family: x86_64 00:00:46.794 Host machine cpu: x86_64 00:00:46.794 Message: ## Building in Developer Mode ## 00:00:46.794 Program pkg-config found: YES (/usr/bin/pkg-config) 00:00:46.794 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:00:46.794 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:00:46.794 Program objdump found: YES (/usr/bin/objdump) 00:00:46.794 Program python3 found: YES (/usr/bin/python3) 00:00:46.794 Program cat found: YES (/usr/bin/cat) 00:00:46.794 config/meson.build:83: WARNING: The "machine" option is 
deprecated. Please use "cpu_instruction_set" instead. 00:00:46.794 Checking for size of "void *" : 8 00:00:46.794 Checking for size of "void *" : 8 (cached) 00:00:46.794 Library m found: YES 00:00:46.794 Library numa found: YES 00:00:46.794 Has header "numaif.h" : YES 00:00:46.794 Library fdt found: NO 00:00:46.794 Library execinfo found: NO 00:00:46.794 Has header "execinfo.h" : YES 00:00:46.794 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:00:46.794 Run-time dependency libarchive found: NO (tried pkgconfig) 00:00:46.794 Run-time dependency libbsd found: NO (tried pkgconfig) 00:00:46.794 Run-time dependency jansson found: NO (tried pkgconfig) 00:00:46.794 Run-time dependency openssl found: YES 3.0.9 00:00:46.794 Run-time dependency libpcap found: YES 1.10.4 00:00:46.794 Has header "pcap.h" with dependency libpcap: YES 00:00:46.794 Compiler for C supports arguments -Wcast-qual: YES 00:00:46.794 Compiler for C supports arguments -Wdeprecated: YES 00:00:46.794 Compiler for C supports arguments -Wformat: YES 00:00:46.794 Compiler for C supports arguments -Wformat-nonliteral: NO 00:00:46.794 Compiler for C supports arguments -Wformat-security: NO 00:00:46.794 Compiler for C supports arguments -Wmissing-declarations: YES 00:00:46.794 Compiler for C supports arguments -Wmissing-prototypes: YES 00:00:46.794 Compiler for C supports arguments -Wnested-externs: YES 00:00:46.794 Compiler for C supports arguments -Wold-style-definition: YES 00:00:46.794 Compiler for C supports arguments -Wpointer-arith: YES 00:00:46.794 Compiler for C supports arguments -Wsign-compare: YES 00:00:46.794 Compiler for C supports arguments -Wstrict-prototypes: YES 00:00:46.794 Compiler for C supports arguments -Wundef: YES 00:00:46.794 Compiler for C supports arguments -Wwrite-strings: YES 00:00:46.794 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:00:46.794 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:00:46.794 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:00:46.794 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:00:46.794 Compiler for C supports arguments -mavx512f: YES 00:00:46.794 Checking if "AVX512 checking" compiles: YES 00:00:46.794 Fetching value of define "__SSE4_2__" : 1 00:00:46.794 Fetching value of define "__AES__" : 1 00:00:46.794 Fetching value of define "__AVX__" : 1 00:00:46.794 Fetching value of define "__AVX2__" : (undefined) 00:00:46.794 Fetching value of define "__AVX512BW__" : (undefined) 00:00:46.794 Fetching value of define "__AVX512CD__" : (undefined) 00:00:46.794 Fetching value of define "__AVX512DQ__" : (undefined) 00:00:46.794 Fetching value of define "__AVX512F__" : (undefined) 00:00:46.794 Fetching value of define "__AVX512VL__" : (undefined) 00:00:46.794 Fetching value of define "__PCLMUL__" : 1 00:00:46.794 Fetching value of define "__RDRND__" : 1 00:00:46.794 Fetching value of define "__RDSEED__" : (undefined) 00:00:46.794 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:00:46.794 Compiler for C supports arguments -Wno-format-truncation: YES 00:00:46.794 Message: lib/kvargs: Defining dependency "kvargs" 00:00:46.794 Message: lib/telemetry: Defining dependency "telemetry" 00:00:46.794 Checking for function "getentropy" : YES 00:00:46.794 Message: lib/eal: Defining dependency "eal" 00:00:46.794 Message: lib/ring: Defining dependency "ring" 00:00:46.794 Message: lib/rcu: Defining dependency "rcu" 00:00:46.794 Message: lib/mempool: Defining dependency "mempool" 00:00:46.794 Message: 
lib/mbuf: Defining dependency "mbuf" 00:00:46.794 Fetching value of define "__PCLMUL__" : 1 (cached) 00:00:46.794 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:00:46.794 Compiler for C supports arguments -mpclmul: YES 00:00:46.794 Compiler for C supports arguments -maes: YES 00:00:46.794 Compiler for C supports arguments -mavx512f: YES (cached) 00:00:46.794 Compiler for C supports arguments -mavx512bw: YES 00:00:46.794 Compiler for C supports arguments -mavx512dq: YES 00:00:46.794 Compiler for C supports arguments -mavx512vl: YES 00:00:46.794 Compiler for C supports arguments -mvpclmulqdq: YES 00:00:46.794 Compiler for C supports arguments -mavx2: YES 00:00:46.794 Compiler for C supports arguments -mavx: YES 00:00:46.794 Message: lib/net: Defining dependency "net" 00:00:46.794 Message: lib/meter: Defining dependency "meter" 00:00:46.794 Message: lib/ethdev: Defining dependency "ethdev" 00:00:46.794 Message: lib/pci: Defining dependency "pci" 00:00:46.794 Message: lib/cmdline: Defining dependency "cmdline" 00:00:46.794 Message: lib/metrics: Defining dependency "metrics" 00:00:46.794 Message: lib/hash: Defining dependency "hash" 00:00:46.794 Message: lib/timer: Defining dependency "timer" 00:00:46.794 Fetching value of define "__AVX2__" : (undefined) (cached) 00:00:46.794 Compiler for C supports arguments -mavx2: YES (cached) 00:00:46.794 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:00:46.794 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:00:46.794 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:00:46.794 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:00:46.794 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:00:46.794 Message: lib/acl: Defining dependency "acl" 00:00:46.794 Message: lib/bbdev: Defining dependency "bbdev" 00:00:46.794 Message: lib/bitratestats: Defining dependency "bitratestats" 00:00:46.794 Run-time dependency libelf found: YES 0.190 00:00:46.794 Message: lib/bpf: Defining dependency "bpf" 00:00:46.794 Message: lib/cfgfile: Defining dependency "cfgfile" 00:00:46.794 Message: lib/compressdev: Defining dependency "compressdev" 00:00:46.794 Message: lib/cryptodev: Defining dependency "cryptodev" 00:00:46.794 Message: lib/distributor: Defining dependency "distributor" 00:00:46.794 Message: lib/efd: Defining dependency "efd" 00:00:46.794 Message: lib/eventdev: Defining dependency "eventdev" 00:00:46.794 Message: lib/gpudev: Defining dependency "gpudev" 00:00:46.794 Message: lib/gro: Defining dependency "gro" 00:00:46.794 Message: lib/gso: Defining dependency "gso" 00:00:46.794 Message: lib/ip_frag: Defining dependency "ip_frag" 00:00:46.794 Message: lib/jobstats: Defining dependency "jobstats" 00:00:46.794 Message: lib/latencystats: Defining dependency "latencystats" 00:00:46.794 Message: lib/lpm: Defining dependency "lpm" 00:00:46.794 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:00:46.794 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:00:46.794 Fetching value of define "__AVX512IFMA__" : (undefined) 00:00:46.794 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:00:46.794 Message: lib/member: Defining dependency "member" 00:00:46.794 Message: lib/pcapng: Defining dependency "pcapng" 00:00:46.794 Compiler for C supports arguments -Wno-cast-qual: YES 00:00:46.794 Message: lib/power: Defining dependency "power" 00:00:46.794 Message: lib/rawdev: Defining dependency "rawdev" 00:00:46.794 
Message: lib/regexdev: Defining dependency "regexdev" 00:00:46.794 Message: lib/dmadev: Defining dependency "dmadev" 00:00:46.794 Message: lib/rib: Defining dependency "rib" 00:00:46.794 Message: lib/reorder: Defining dependency "reorder" 00:00:46.794 Message: lib/sched: Defining dependency "sched" 00:00:46.794 Message: lib/security: Defining dependency "security" 00:00:46.794 Message: lib/stack: Defining dependency "stack" 00:00:46.794 Has header "linux/userfaultfd.h" : YES 00:00:46.794 Message: lib/vhost: Defining dependency "vhost" 00:00:46.794 Message: lib/ipsec: Defining dependency "ipsec" 00:00:46.794 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:00:46.794 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:00:46.794 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:00:46.794 Compiler for C supports arguments -mavx512bw: YES (cached) 00:00:46.794 Message: lib/fib: Defining dependency "fib" 00:00:46.794 Message: lib/port: Defining dependency "port" 00:00:46.794 Message: lib/pdump: Defining dependency "pdump" 00:00:46.794 Message: lib/table: Defining dependency "table" 00:00:46.794 Message: lib/pipeline: Defining dependency "pipeline" 00:00:46.794 Message: lib/graph: Defining dependency "graph" 00:00:46.794 Message: lib/node: Defining dependency "node" 00:00:46.794 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:00:46.794 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:00:46.794 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:00:46.794 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:00:46.794 Compiler for C supports arguments -Wno-sign-compare: YES 00:00:46.794 Compiler for C supports arguments -Wno-unused-value: YES 00:00:47.729 Compiler for C supports arguments -Wno-format: YES 00:00:47.729 Compiler for C supports arguments -Wno-format-security: YES 00:00:47.729 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:00:47.729 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:00:47.729 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:00:47.729 Compiler for C supports arguments -Wno-unused-parameter: YES 00:00:47.729 Fetching value of define "__AVX2__" : (undefined) (cached) 00:00:47.729 Compiler for C supports arguments -mavx2: YES (cached) 00:00:47.729 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:00:47.729 Compiler for C supports arguments -mavx512f: YES (cached) 00:00:47.729 Compiler for C supports arguments -mavx512bw: YES (cached) 00:00:47.729 Compiler for C supports arguments -march=skylake-avx512: YES 00:00:47.729 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:00:47.729 Program doxygen found: YES (/usr/bin/doxygen) 00:00:47.729 Configuring doxy-api.conf using configuration 00:00:47.729 Program sphinx-build found: NO 00:00:47.729 Configuring rte_build_config.h using configuration 00:00:47.729 Message: 00:00:47.729 ================= 00:00:47.729 Applications Enabled 00:00:47.729 ================= 00:00:47.729 00:00:47.729 apps: 00:00:47.729 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:00:47.729 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:00:47.729 test-security-perf, 00:00:47.729 00:00:47.729 Message: 00:00:47.729 ================= 00:00:47.729 Libraries Enabled 00:00:47.729 ================= 00:00:47.729 00:00:47.729 libs: 00:00:47.730 kvargs, telemetry, eal, ring, rcu, 
mempool, mbuf, net, 00:00:47.730 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:00:47.730 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:00:47.730 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:00:47.730 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:00:47.730 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:00:47.730 table, pipeline, graph, node, 00:00:47.730 00:00:47.730 Message: 00:00:47.730 =============== 00:00:47.730 Drivers Enabled 00:00:47.730 =============== 00:00:47.730 00:00:47.730 common: 00:00:47.730 00:00:47.730 bus: 00:00:47.730 pci, vdev, 00:00:47.730 mempool: 00:00:47.730 ring, 00:00:47.730 dma: 00:00:47.730 00:00:47.730 net: 00:00:47.730 i40e, 00:00:47.730 raw: 00:00:47.730 00:00:47.730 crypto: 00:00:47.730 00:00:47.730 compress: 00:00:47.730 00:00:47.730 regex: 00:00:47.730 00:00:47.730 vdpa: 00:00:47.730 00:00:47.730 event: 00:00:47.730 00:00:47.730 baseband: 00:00:47.730 00:00:47.730 gpu: 00:00:47.730 00:00:47.730 00:00:47.730 Message: 00:00:47.730 ================= 00:00:47.730 Content Skipped 00:00:47.730 ================= 00:00:47.730 00:00:47.730 apps: 00:00:47.730 00:00:47.730 libs: 00:00:47.730 kni: explicitly disabled via build config (deprecated lib) 00:00:47.730 flow_classify: explicitly disabled via build config (deprecated lib) 00:00:47.730 00:00:47.730 drivers: 00:00:47.730 common/cpt: not in enabled drivers build config 00:00:47.730 common/dpaax: not in enabled drivers build config 00:00:47.730 common/iavf: not in enabled drivers build config 00:00:47.730 common/idpf: not in enabled drivers build config 00:00:47.730 common/mvep: not in enabled drivers build config 00:00:47.730 common/octeontx: not in enabled drivers build config 00:00:47.730 bus/auxiliary: not in enabled drivers build config 00:00:47.730 bus/dpaa: not in enabled drivers build config 00:00:47.730 bus/fslmc: not in enabled drivers build config 00:00:47.730 bus/ifpga: not in enabled drivers build config 00:00:47.730 bus/vmbus: not in enabled drivers build config 00:00:47.730 common/cnxk: not in enabled drivers build config 00:00:47.730 common/mlx5: not in enabled drivers build config 00:00:47.730 common/qat: not in enabled drivers build config 00:00:47.730 common/sfc_efx: not in enabled drivers build config 00:00:47.730 mempool/bucket: not in enabled drivers build config 00:00:47.730 mempool/cnxk: not in enabled drivers build config 00:00:47.730 mempool/dpaa: not in enabled drivers build config 00:00:47.730 mempool/dpaa2: not in enabled drivers build config 00:00:47.730 mempool/octeontx: not in enabled drivers build config 00:00:47.730 mempool/stack: not in enabled drivers build config 00:00:47.730 dma/cnxk: not in enabled drivers build config 00:00:47.730 dma/dpaa: not in enabled drivers build config 00:00:47.730 dma/dpaa2: not in enabled drivers build config 00:00:47.730 dma/hisilicon: not in enabled drivers build config 00:00:47.730 dma/idxd: not in enabled drivers build config 00:00:47.730 dma/ioat: not in enabled drivers build config 00:00:47.730 dma/skeleton: not in enabled drivers build config 00:00:47.730 net/af_packet: not in enabled drivers build config 00:00:47.730 net/af_xdp: not in enabled drivers build config 00:00:47.730 net/ark: not in enabled drivers build config 00:00:47.730 net/atlantic: not in enabled drivers build config 00:00:47.730 net/avp: not in enabled drivers build config 00:00:47.730 net/axgbe: not in enabled drivers build config 00:00:47.730 net/bnx2x: not in enabled 
drivers build config 00:00:47.730 net/bnxt: not in enabled drivers build config 00:00:47.730 net/bonding: not in enabled drivers build config 00:00:47.730 net/cnxk: not in enabled drivers build config 00:00:47.730 net/cxgbe: not in enabled drivers build config 00:00:47.730 net/dpaa: not in enabled drivers build config 00:00:47.730 net/dpaa2: not in enabled drivers build config 00:00:47.730 net/e1000: not in enabled drivers build config 00:00:47.730 net/ena: not in enabled drivers build config 00:00:47.730 net/enetc: not in enabled drivers build config 00:00:47.730 net/enetfec: not in enabled drivers build config 00:00:47.730 net/enic: not in enabled drivers build config 00:00:47.730 net/failsafe: not in enabled drivers build config 00:00:47.730 net/fm10k: not in enabled drivers build config 00:00:47.730 net/gve: not in enabled drivers build config 00:00:47.730 net/hinic: not in enabled drivers build config 00:00:47.730 net/hns3: not in enabled drivers build config 00:00:47.730 net/iavf: not in enabled drivers build config 00:00:47.730 net/ice: not in enabled drivers build config 00:00:47.730 net/idpf: not in enabled drivers build config 00:00:47.730 net/igc: not in enabled drivers build config 00:00:47.730 net/ionic: not in enabled drivers build config 00:00:47.730 net/ipn3ke: not in enabled drivers build config 00:00:47.730 net/ixgbe: not in enabled drivers build config 00:00:47.730 net/kni: not in enabled drivers build config 00:00:47.730 net/liquidio: not in enabled drivers build config 00:00:47.730 net/mana: not in enabled drivers build config 00:00:47.730 net/memif: not in enabled drivers build config 00:00:47.730 net/mlx4: not in enabled drivers build config 00:00:47.730 net/mlx5: not in enabled drivers build config 00:00:47.730 net/mvneta: not in enabled drivers build config 00:00:47.730 net/mvpp2: not in enabled drivers build config 00:00:47.730 net/netvsc: not in enabled drivers build config 00:00:47.730 net/nfb: not in enabled drivers build config 00:00:47.730 net/nfp: not in enabled drivers build config 00:00:47.730 net/ngbe: not in enabled drivers build config 00:00:47.730 net/null: not in enabled drivers build config 00:00:47.730 net/octeontx: not in enabled drivers build config 00:00:47.730 net/octeon_ep: not in enabled drivers build config 00:00:47.730 net/pcap: not in enabled drivers build config 00:00:47.730 net/pfe: not in enabled drivers build config 00:00:47.730 net/qede: not in enabled drivers build config 00:00:47.730 net/ring: not in enabled drivers build config 00:00:47.730 net/sfc: not in enabled drivers build config 00:00:47.730 net/softnic: not in enabled drivers build config 00:00:47.730 net/tap: not in enabled drivers build config 00:00:47.730 net/thunderx: not in enabled drivers build config 00:00:47.730 net/txgbe: not in enabled drivers build config 00:00:47.730 net/vdev_netvsc: not in enabled drivers build config 00:00:47.730 net/vhost: not in enabled drivers build config 00:00:47.730 net/virtio: not in enabled drivers build config 00:00:47.730 net/vmxnet3: not in enabled drivers build config 00:00:47.730 raw/cnxk_bphy: not in enabled drivers build config 00:00:47.730 raw/cnxk_gpio: not in enabled drivers build config 00:00:47.730 raw/dpaa2_cmdif: not in enabled drivers build config 00:00:47.730 raw/ifpga: not in enabled drivers build config 00:00:47.730 raw/ntb: not in enabled drivers build config 00:00:47.730 raw/skeleton: not in enabled drivers build config 00:00:47.730 crypto/armv8: not in enabled drivers build config 00:00:47.730 crypto/bcmfs: not in 
enabled drivers build config 00:00:47.730 crypto/caam_jr: not in enabled drivers build config 00:00:47.730 crypto/ccp: not in enabled drivers build config 00:00:47.730 crypto/cnxk: not in enabled drivers build config 00:00:47.730 crypto/dpaa_sec: not in enabled drivers build config 00:00:47.730 crypto/dpaa2_sec: not in enabled drivers build config 00:00:47.730 crypto/ipsec_mb: not in enabled drivers build config 00:00:47.730 crypto/mlx5: not in enabled drivers build config 00:00:47.730 crypto/mvsam: not in enabled drivers build config 00:00:47.730 crypto/nitrox: not in enabled drivers build config 00:00:47.730 crypto/null: not in enabled drivers build config 00:00:47.730 crypto/octeontx: not in enabled drivers build config 00:00:47.730 crypto/openssl: not in enabled drivers build config 00:00:47.730 crypto/scheduler: not in enabled drivers build config 00:00:47.730 crypto/uadk: not in enabled drivers build config 00:00:47.730 crypto/virtio: not in enabled drivers build config 00:00:47.730 compress/isal: not in enabled drivers build config 00:00:47.730 compress/mlx5: not in enabled drivers build config 00:00:47.730 compress/octeontx: not in enabled drivers build config 00:00:47.730 compress/zlib: not in enabled drivers build config 00:00:47.730 regex/mlx5: not in enabled drivers build config 00:00:47.730 regex/cn9k: not in enabled drivers build config 00:00:47.730 vdpa/ifc: not in enabled drivers build config 00:00:47.730 vdpa/mlx5: not in enabled drivers build config 00:00:47.730 vdpa/sfc: not in enabled drivers build config 00:00:47.730 event/cnxk: not in enabled drivers build config 00:00:47.730 event/dlb2: not in enabled drivers build config 00:00:47.730 event/dpaa: not in enabled drivers build config 00:00:47.730 event/dpaa2: not in enabled drivers build config 00:00:47.730 event/dsw: not in enabled drivers build config 00:00:47.730 event/opdl: not in enabled drivers build config 00:00:47.730 event/skeleton: not in enabled drivers build config 00:00:47.730 event/sw: not in enabled drivers build config 00:00:47.730 event/octeontx: not in enabled drivers build config 00:00:47.730 baseband/acc: not in enabled drivers build config 00:00:47.730 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:00:47.730 baseband/fpga_lte_fec: not in enabled drivers build config 00:00:47.730 baseband/la12xx: not in enabled drivers build config 00:00:47.730 baseband/null: not in enabled drivers build config 00:00:47.730 baseband/turbo_sw: not in enabled drivers build config 00:00:47.730 gpu/cuda: not in enabled drivers build config 00:00:47.730 00:00:47.730 00:00:47.730 Build targets in project: 316 00:00:47.730 00:00:47.730 DPDK 22.11.4 00:00:47.730 00:00:47.730 User defined options 00:00:47.730 libdir : lib 00:00:47.730 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:47.730 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:00:47.730 c_link_args : 00:00:47.730 enable_docs : false 00:00:47.730 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:00:47.730 enable_kmods : false 00:00:47.730 machine : native 00:00:47.730 tests : false 00:00:47.730 00:00:47.730 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:00:47.730 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
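The configure step recorded above is an ordinary meson + ninja build of DPDK. A sketch of the equivalent standalone invocation, assuming the paths and options visible in the log; `meson setup` is used in place of the bare `meson` form that the log flags as ambiguous and deprecated:

    #!/usr/bin/env bash
    # Configure and build DPDK the way this pipeline does (paths as in the log).
    DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
    cd "$DPDK"
    meson setup build-tmp --prefix="$DPDK/build" --libdir lib \
        -Denable_docs=false -Denable_kmods=false -Dtests=false \
        -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Dmachine=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
    ninja -C build-tmp -j"$(nproc)"            # the log runs with -j48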
00:00:47.730 01:29:11 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:00:47.992 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:00:47.992 [1/745] Generating lib/rte_telemetry_def with a custom command 00:00:47.992 [2/745] Generating lib/rte_kvargs_mingw with a custom command 00:00:47.992 [3/745] Generating lib/rte_kvargs_def with a custom command 00:00:47.992 [4/745] Generating lib/rte_telemetry_mingw with a custom command 00:00:47.992 [5/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:00:47.992 [6/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:00:47.992 [7/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:00:47.992 [8/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:00:47.992 [9/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:00:47.992 [10/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:00:47.992 [11/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:00:47.992 [12/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:00:47.992 [13/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:00:47.992 [14/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:00:47.992 [15/745] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:00:48.251 [16/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:00:48.251 [17/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:00:48.251 [18/745] Linking static target lib/librte_kvargs.a 00:00:48.251 [19/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:00:48.251 [20/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:00:48.251 [21/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:00:48.251 [22/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:00:48.251 [23/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:00:48.251 [24/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:00:48.251 [25/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:00:48.251 [26/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:00:48.251 [27/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:00:48.251 [28/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:00:48.251 [29/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:00:48.251 [30/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:00:48.251 [31/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:00:48.251 [32/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:00:48.251 [33/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:00:48.251 [34/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:00:48.251 [35/745] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:00:48.251 [36/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:00:48.251 [37/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:00:48.251 [38/745] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:00:48.251 [39/745] Generating lib/rte_eal_mingw with a custom command 00:00:48.251 [40/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:00:48.251 [41/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:00:48.251 [42/745] Generating lib/rte_eal_def with a custom command 00:00:48.251 [43/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:00:48.251 [44/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:00:48.251 [45/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:00:48.251 [46/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:00:48.251 [47/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:00:48.251 [48/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:00:48.251 [49/745] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:00:48.251 [50/745] Generating lib/rte_rcu_mingw with a custom command 00:00:48.251 [51/745] Generating lib/rte_ring_def with a custom command 00:00:48.251 [52/745] Generating lib/rte_ring_mingw with a custom command 00:00:48.251 [53/745] Generating lib/rte_rcu_def with a custom command 00:00:48.251 [54/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:00:48.251 [55/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:00:48.251 [56/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:00:48.251 [57/745] Generating lib/rte_mempool_def with a custom command 00:00:48.251 [58/745] Generating lib/rte_mempool_mingw with a custom command 00:00:48.251 [59/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:00:48.251 [60/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:00:48.251 [61/745] Generating lib/rte_mbuf_mingw with a custom command 00:00:48.251 [62/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:00:48.251 [63/745] Generating lib/rte_mbuf_def with a custom command 00:00:48.251 [64/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:00:48.512 [65/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:00:48.512 [66/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:00:48.512 [67/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:00:48.512 [68/745] Generating lib/rte_net_mingw with a custom command 00:00:48.512 [69/745] Generating lib/rte_net_def with a custom command 00:00:48.513 [70/745] Generating lib/rte_meter_def with a custom command 00:00:48.513 [71/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:00:48.513 [72/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:00:48.513 [73/745] Generating lib/rte_meter_mingw with a custom command 00:00:48.513 [74/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:00:48.513 [75/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:00:48.513 [76/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:00:48.513 [77/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:00:48.513 [78/745] Generating lib/rte_ethdev_def with a custom command 00:00:48.513 [79/745] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:00:48.513 [80/745] Compiling C object 
lib/librte_ring.a.p/ring_rte_ring.c.o 00:00:48.513 [81/745] Generating lib/rte_ethdev_mingw with a custom command 00:00:48.513 [82/745] Linking static target lib/librte_ring.a 00:00:48.513 [83/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:00:48.513 [84/745] Linking target lib/librte_kvargs.so.23.0 00:00:48.772 [85/745] Generating lib/rte_pci_def with a custom command 00:00:48.772 [86/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:00:48.772 [87/745] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:00:48.772 [88/745] Linking static target lib/librte_meter.a 00:00:48.772 [89/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:00:48.772 [90/745] Generating lib/rte_pci_mingw with a custom command 00:00:48.772 [91/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:00:48.772 [92/745] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:00:48.772 [93/745] Linking static target lib/librte_pci.a 00:00:48.772 [94/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:00:48.772 [95/745] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:00:48.772 [96/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:00:48.772 [97/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:00:48.772 [98/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:00:49.037 [99/745] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:00:49.037 [100/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:00:49.037 [101/745] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:00:49.037 [102/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:00:49.037 [103/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:00:49.037 [104/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:00:49.037 [105/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:00:49.037 [106/745] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:00:49.037 [107/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:00:49.037 [108/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:00:49.037 [109/745] Linking static target lib/librte_telemetry.a 00:00:49.037 [110/745] Generating lib/rte_cmdline_def with a custom command 00:00:49.037 [111/745] Generating lib/rte_cmdline_mingw with a custom command 00:00:49.037 [112/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:00:49.037 [113/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:00:49.037 [114/745] Generating lib/rte_metrics_def with a custom command 00:00:49.037 [115/745] Generating lib/rte_metrics_mingw with a custom command 00:00:49.037 [116/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:00:49.037 [117/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:00:49.300 [118/745] Generating lib/rte_hash_def with a custom command 00:00:49.300 [119/745] Generating lib/rte_hash_mingw with a custom command 00:00:49.300 [120/745] Generating lib/rte_timer_def with a custom command 00:00:49.300 [121/745] Generating lib/rte_timer_mingw with a custom command 00:00:49.300 [122/745] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:00:49.300 [123/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:00:49.300 [124/745] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:00:49.300 [125/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:00:49.560 [126/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:00:49.560 [127/745] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:00:49.560 [128/745] Linking static target lib/net/libnet_crc_avx512_lib.a 00:00:49.560 [129/745] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:00:49.560 [130/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:00:49.560 [131/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:00:49.560 [132/745] Generating lib/rte_acl_def with a custom command 00:00:49.560 [133/745] Generating lib/rte_acl_mingw with a custom command 00:00:49.560 [134/745] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:00:49.560 [135/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:00:49.560 [136/745] Generating lib/rte_bbdev_mingw with a custom command 00:00:49.560 [137/745] Generating lib/rte_bbdev_def with a custom command 00:00:49.560 [138/745] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:00:49.560 [139/745] Generating lib/rte_bitratestats_def with a custom command 00:00:49.560 [140/745] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:00:49.560 [141/745] Generating lib/rte_bitratestats_mingw with a custom command 00:00:49.560 [142/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:00:49.560 [143/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:00:49.560 [144/745] Linking target lib/librte_telemetry.so.23.0 00:00:49.826 [145/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:00:49.826 [146/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:00:49.826 [147/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:00:49.826 [148/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:00:49.826 [149/745] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:00:49.826 [150/745] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:00:49.826 [151/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:00:49.826 [152/745] Generating lib/rte_bpf_def with a custom command 00:00:49.826 [153/745] Generating lib/rte_bpf_mingw with a custom command 00:00:49.826 [154/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:00:49.826 [155/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:00:49.826 [156/745] Generating lib/rte_cfgfile_def with a custom command 00:00:49.826 [157/745] Generating lib/rte_cfgfile_mingw with a custom command 00:00:49.826 [158/745] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:00:49.826 [159/745] Generating lib/rte_compressdev_def with a custom command 00:00:49.826 [160/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:00:49.826 [161/745] Generating lib/rte_compressdev_mingw with a custom command 00:00:50.087 [162/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:00:50.087 [163/745] Generating lib/rte_cryptodev_def with a custom command 00:00:50.087 
[164/745] Generating lib/rte_cryptodev_mingw with a custom command 00:00:50.087 [165/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:00:50.087 [166/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:00:50.087 [167/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:00:50.087 [168/745] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:00:50.087 [169/745] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:00:50.087 [170/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:00:50.087 [171/745] Generating lib/rte_distributor_def with a custom command 00:00:50.087 [172/745] Linking static target lib/librte_rcu.a 00:00:50.087 [173/745] Linking static target lib/librte_timer.a 00:00:50.087 [174/745] Linking static target lib/librte_cmdline.a 00:00:50.087 [175/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:00:50.087 [176/745] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:00:50.087 [177/745] Linking static target lib/librte_net.a 00:00:50.087 [178/745] Generating lib/rte_distributor_mingw with a custom command 00:00:50.087 [179/745] Generating lib/rte_efd_def with a custom command 00:00:50.087 [180/745] Generating lib/rte_efd_mingw with a custom command 00:00:50.348 [181/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:00:50.348 [182/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:00:50.348 [183/745] Linking static target lib/librte_mempool.a 00:00:50.348 [184/745] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:00:50.348 [185/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:00:50.348 [186/745] Linking static target lib/librte_cfgfile.a 00:00:50.348 [187/745] Linking static target lib/librte_metrics.a 00:00:50.610 [188/745] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:00:50.610 [189/745] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:00:50.610 [190/745] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:00:50.610 [191/745] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:00:50.610 [192/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:00:50.610 [193/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:00:50.610 [194/745] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:00:50.610 [195/745] Generating lib/rte_eventdev_def with a custom command 00:00:50.610 [196/745] Linking static target lib/librte_eal.a 00:00:50.872 [197/745] Generating lib/rte_eventdev_mingw with a custom command 00:00:50.872 [198/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:00:50.872 [199/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:00:50.872 [200/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:00:50.872 [201/745] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:00:50.872 [202/745] Generating lib/rte_gpudev_def with a custom command 00:00:50.872 [203/745] Generating lib/rte_gpudev_mingw with a custom command 00:00:50.872 [204/745] Linking static target lib/librte_bitratestats.a 00:00:50.872 [205/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:00:50.872 [206/745] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:00:50.872 [207/745] Generating lib/rte_gro_def with a custom command 00:00:50.872 [208/745] 
Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:00:50.872 [209/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:00:50.872 [210/745] Generating lib/rte_gro_mingw with a custom command 00:00:50.872 [211/745] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:00:51.133 [212/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:00:51.133 [213/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:00:51.133 [214/745] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:00:51.133 [215/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:00:51.133 [216/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:00:51.133 [217/745] Generating lib/rte_gso_def with a custom command 00:00:51.394 [218/745] Generating lib/rte_gso_mingw with a custom command 00:00:51.394 [219/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:00:51.394 [220/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:00:51.394 [221/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:00:51.394 [222/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:00:51.394 [223/745] Generating lib/rte_ip_frag_def with a custom command 00:00:51.394 [224/745] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:00:51.394 [225/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:00:51.394 [226/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:00:51.394 [227/745] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:00:51.394 [228/745] Linking static target lib/librte_bbdev.a 00:00:51.659 [229/745] Generating lib/rte_ip_frag_mingw with a custom command 00:00:51.659 [230/745] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:00:51.659 [231/745] Generating lib/rte_jobstats_mingw with a custom command 00:00:51.659 [232/745] Generating lib/rte_jobstats_def with a custom command 00:00:51.659 [233/745] Generating lib/rte_latencystats_def with a custom command 00:00:51.659 [234/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:00:51.659 [235/745] Generating lib/rte_latencystats_mingw with a custom command 00:00:51.659 [236/745] Generating lib/rte_lpm_def with a custom command 00:00:51.659 [237/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:00:51.659 [238/745] Generating lib/rte_lpm_mingw with a custom command 00:00:51.659 [239/745] Linking static target lib/librte_compressdev.a 00:00:51.659 [240/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:00:51.659 [241/745] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:00:51.920 [242/745] Linking static target lib/librte_jobstats.a 00:00:51.920 [243/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:00:51.920 [244/745] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:00:52.181 [245/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:00:52.181 [246/745] Linking static target lib/librte_distributor.a 00:00:52.181 [247/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:00:52.181 [248/745] Generating 
lib/rte_member_def with a custom command 00:00:52.181 [249/745] Generating lib/rte_member_mingw with a custom command 00:00:52.181 [250/745] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:00:52.181 [251/745] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:00:52.182 [252/745] Generating lib/rte_pcapng_def with a custom command 00:00:52.182 [253/745] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:00:52.182 [254/745] Generating lib/rte_pcapng_mingw with a custom command 00:00:52.442 [255/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:00:52.443 [256/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:00:52.443 [257/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:00:52.443 [258/745] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:52.443 [259/745] Linking static target lib/librte_bpf.a 00:00:52.443 [260/745] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:00:52.443 [261/745] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:00:52.443 [262/745] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:00:52.443 [263/745] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:00:52.443 [264/745] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:00:52.443 [265/745] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:00:52.443 [266/745] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:00:52.443 [267/745] Linking static target lib/librte_gpudev.a 00:00:52.443 [268/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:00:52.443 [269/745] Generating lib/rte_power_def with a custom command 00:00:52.443 [270/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:00:52.443 [271/745] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:00:52.443 [272/745] Generating lib/rte_power_mingw with a custom command 00:00:52.443 [273/745] Linking static target lib/librte_gro.a 00:00:52.702 [274/745] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:00:52.702 [275/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:00:52.702 [276/745] Generating lib/rte_rawdev_def with a custom command 00:00:52.702 [277/745] Generating lib/rte_rawdev_mingw with a custom command 00:00:52.702 [278/745] Generating lib/rte_regexdev_def with a custom command 00:00:52.702 [279/745] Generating lib/rte_regexdev_mingw with a custom command 00:00:52.702 [280/745] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:00:52.702 [281/745] Generating lib/rte_dmadev_def with a custom command 00:00:52.702 [282/745] Generating lib/rte_dmadev_mingw with a custom command 00:00:52.702 [283/745] Generating lib/rte_rib_mingw with a custom command 00:00:52.702 [284/745] Generating lib/rte_rib_def with a custom command 00:00:52.702 [285/745] Generating lib/rte_reorder_def with a custom command 00:00:52.966 [286/745] Generating lib/rte_reorder_mingw with a custom command 00:00:52.966 [287/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:00:52.966 [288/745] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:00:52.966 [289/745] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:00:52.966 [290/745] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:00:52.966 [291/745] 
Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:00:52.966 [292/745] Generating lib/rte_sched_def with a custom command 00:00:52.966 [293/745] Generating lib/rte_sched_mingw with a custom command 00:00:52.966 [294/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:00:52.966 [295/745] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:00:52.966 [296/745] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:00:52.966 [297/745] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:52.966 [298/745] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:00:52.966 [299/745] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:00:52.966 [300/745] Linking static target lib/member/libsketch_avx512_tmp.a 00:00:52.966 [301/745] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:00:53.228 [302/745] Linking static target lib/librte_latencystats.a 00:00:53.228 [303/745] Generating lib/rte_security_def with a custom command 00:00:53.228 [304/745] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:00:53.228 [305/745] Generating lib/rte_security_mingw with a custom command 00:00:53.228 [306/745] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:00:53.228 [307/745] Generating lib/rte_stack_def with a custom command 00:00:53.228 [308/745] Generating lib/rte_stack_mingw with a custom command 00:00:53.228 [309/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:00:53.228 [310/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:00:53.228 [311/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:00:53.228 [312/745] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:00:53.228 [313/745] Linking static target lib/librte_rawdev.a 00:00:53.228 [314/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:00:53.228 [315/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:00:53.228 [316/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:00:53.228 [317/745] Linking static target lib/librte_stack.a 00:00:53.228 [318/745] Generating lib/rte_vhost_def with a custom command 00:00:53.228 [319/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:00:53.228 [320/745] Generating lib/rte_vhost_mingw with a custom command 00:00:53.489 [321/745] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:00:53.489 [322/745] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:00:53.489 [323/745] Linking static target lib/librte_dmadev.a 00:00:53.489 [324/745] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:00:53.489 [325/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:00:53.489 [326/745] Linking static target lib/librte_ip_frag.a 00:00:53.489 [327/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:00:53.489 [328/745] Generating lib/rte_ipsec_def with a custom command 00:00:53.749 [329/745] Generating lib/rte_ipsec_mingw with a custom command 00:00:53.749 [330/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:00:53.749 [331/745] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:00:53.749 [332/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 
00:00:53.749 [333/745] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:00:54.011 [334/745] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:54.011 [335/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:00:54.011 [336/745] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:54.011 [337/745] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:00:54.011 [338/745] Generating lib/rte_fib_def with a custom command 00:00:54.011 [339/745] Generating lib/rte_fib_mingw with a custom command 00:00:54.011 [340/745] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:00:54.011 [341/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:00:54.011 [342/745] Linking static target lib/librte_gso.a 00:00:54.011 [343/745] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:00:54.269 [344/745] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:00:54.269 [345/745] Linking static target lib/librte_regexdev.a 00:00:54.269 [346/745] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:54.269 [347/745] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:00:54.269 [348/745] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:00:54.269 [349/745] Linking static target lib/librte_efd.a 00:00:54.533 [350/745] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:00:54.533 [351/745] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:00:54.533 [352/745] Linking static target lib/librte_pcapng.a 00:00:54.533 [353/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:00:54.533 [354/745] Linking static target lib/librte_lpm.a 00:00:54.533 [355/745] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:00:54.533 [356/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:00:54.533 [357/745] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:00:54.533 [358/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:00:54.793 [359/745] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:00:54.793 [360/745] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:00:54.793 [361/745] Linking static target lib/librte_reorder.a 00:00:54.793 [362/745] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:00:54.793 [363/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:00:54.793 [364/745] Generating lib/rte_port_mingw with a custom command 00:00:54.793 [365/745] Generating lib/rte_port_def with a custom command 00:00:54.793 [366/745] Generating lib/rte_pdump_def with a custom command 00:00:55.051 [367/745] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:00:55.051 [368/745] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:00:55.051 [369/745] Linking static target lib/fib/libtrie_avx512_tmp.a 00:00:55.051 [370/745] Generating lib/rte_pdump_mingw with a custom command 00:00:55.051 [371/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:00:55.051 [372/745] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:00:55.051 [373/745] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:00:55.051 [374/745] Linking static target lib/librte_security.a 00:00:55.051 [375/745] Generating lib/pcapng.sym_chk with 
a custom command (wrapped by meson to capture output) 00:00:55.051 [376/745] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:00:55.051 [377/745] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:00:55.051 [378/745] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:00:55.051 [379/745] Linking static target lib/acl/libavx2_tmp.a 00:00:55.051 [380/745] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:00:55.051 [381/745] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:00:55.051 [382/745] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:00:55.051 [383/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:00:55.051 [384/745] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:00:55.051 [385/745] Linking static target lib/librte_power.a 00:00:55.314 [386/745] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:00:55.314 [387/745] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:55.314 [388/745] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:00:55.314 [389/745] Linking static target lib/librte_hash.a 00:00:55.574 [390/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:00:55.574 [391/745] Linking static target lib/librte_rib.a 00:00:55.574 [392/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:00:55.574 [393/745] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:00:55.574 [394/745] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:00:55.574 [395/745] Linking static target lib/acl/libavx512_tmp.a 00:00:55.574 [396/745] Linking static target lib/librte_acl.a 00:00:55.574 [397/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:00:55.574 [398/745] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:00:55.574 [399/745] Generating lib/rte_table_def with a custom command 00:00:55.841 [400/745] Generating lib/rte_table_mingw with a custom command 00:00:55.841 [401/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:00:55.841 [402/745] Linking static target lib/librte_ethdev.a 00:00:55.841 [403/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:00:56.140 [404/745] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:00:56.140 [405/745] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:00:56.140 [406/745] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:00:56.140 [407/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:00:56.406 [408/745] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:00:56.406 [409/745] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:00:56.406 [410/745] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:00:56.406 [411/745] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:00:56.406 [412/745] Generating lib/rte_pipeline_def with a custom command 00:00:56.406 [413/745] Generating lib/rte_pipeline_mingw with a custom command 00:00:56.406 [414/745] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:00:56.406 [415/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:00:56.406 [416/745] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 
00:00:56.406 [417/745] Linking static target lib/librte_fib.a 00:00:56.406 [418/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:00:56.406 [419/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:00:56.406 [420/745] Linking static target lib/librte_mbuf.a 00:00:56.406 [421/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:00:56.406 [422/745] Generating lib/rte_graph_def with a custom command 00:00:56.406 [423/745] Generating lib/rte_graph_mingw with a custom command 00:00:56.406 [424/745] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:00:56.663 [425/745] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:00:56.663 [426/745] Linking static target lib/librte_member.a 00:00:56.663 [427/745] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:00:56.663 [428/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:00:56.663 [429/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:00:56.663 [430/745] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:00:56.663 [431/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:00:56.663 [432/745] Linking static target lib/librte_eventdev.a 00:00:56.663 [433/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:00:56.663 [434/745] Compiling C object lib/librte_node.a.p/node_null.c.o 00:00:56.663 [435/745] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:00:56.924 [436/745] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:00:56.924 [437/745] Generating lib/rte_node_mingw with a custom command 00:00:56.924 [438/745] Generating lib/rte_node_def with a custom command 00:00:56.924 [439/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:00:56.924 [440/745] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:00:56.924 [441/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:00:56.924 [442/745] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:00:56.924 [443/745] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:00:56.924 [444/745] Linking static target lib/librte_sched.a 00:00:57.184 [445/745] Generating drivers/rte_bus_pci_def with a custom command 00:00:57.184 [446/745] Generating drivers/rte_bus_pci_mingw with a custom command 00:00:57.184 [447/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:00:57.184 [448/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:00:57.184 [449/745] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:00:57.184 [450/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:00:57.184 [451/745] Generating drivers/rte_bus_vdev_def with a custom command 00:00:57.184 [452/745] Generating drivers/rte_bus_vdev_mingw with a custom command 00:00:57.184 [453/745] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:00:57.184 [454/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:00:57.184 [455/745] Generating drivers/rte_mempool_ring_def with a custom command 00:00:57.184 [456/745] Generating drivers/rte_mempool_ring_mingw with a custom command 00:00:57.446 [457/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 
00:00:57.446 [458/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:00:57.446 [459/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:00:57.446 [460/745] Linking static target lib/librte_cryptodev.a 00:00:57.446 [461/745] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:00:57.446 [462/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:00:57.446 [463/745] Linking static target lib/librte_pdump.a 00:00:57.446 [464/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:00:57.446 [465/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:00:57.446 [466/745] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:00:57.704 [467/745] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:00:57.704 [468/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:00:57.704 [469/745] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:00:57.704 [470/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:00:57.704 [471/745] Linking static target drivers/libtmp_rte_bus_vdev.a 00:00:57.704 [472/745] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:00:57.704 [473/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:00:57.704 [474/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:00:57.704 [475/745] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:00:57.704 [476/745] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:00:57.704 [477/745] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:00:57.963 [478/745] Compiling C object lib/librte_node.a.p/node_log.c.o 00:00:57.963 [479/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:00:57.963 [480/745] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:00:57.963 [481/745] Generating drivers/rte_net_i40e_def with a custom command 00:00:57.963 [482/745] Generating drivers/rte_net_i40e_mingw with a custom command 00:00:57.963 [483/745] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:00:57.963 [484/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:00:57.963 [485/745] Linking static target lib/librte_table.a 00:00:57.963 [486/745] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:00:57.963 [487/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:00:58.224 [488/745] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:00:58.224 [489/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:00:58.224 [490/745] Linking static target drivers/librte_bus_vdev.a 00:00:58.224 [491/745] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:00:58.224 [492/745] Linking static target lib/librte_ipsec.a 00:00:58.485 [493/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:00:58.485 [494/745] Linking static target drivers/libtmp_rte_bus_pci.a 00:00:58.485 [495/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:00:58.485 [496/745] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:58.485 [497/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:00:58.485 [498/745] Compiling C 
object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:00:58.749 [499/745] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:00:58.749 [500/745] Linking static target lib/librte_graph.a 00:00:58.749 [501/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:00:58.749 [502/745] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:00:58.749 [503/745] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:00:58.749 [504/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:00:58.749 [505/745] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:00:58.749 [506/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:00:58.749 [507/745] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:00:58.749 [508/745] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:00:58.749 [509/745] Linking static target drivers/librte_bus_pci.a 00:00:58.749 [510/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:00:59.010 [511/745] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:00:59.010 [512/745] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:00:59.272 [513/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:00:59.272 [514/745] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:00:59.538 [515/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:00:59.538 [516/745] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:00:59.538 [517/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:00:59.538 [518/745] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:00:59.538 [519/745] Linking static target lib/librte_port.a 00:00:59.796 [520/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:00:59.796 [521/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:00:59.796 [522/745] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:00:59.796 [523/745] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:00:59.796 [524/745] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:59.796 [525/745] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:00:59.796 [526/745] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:00.062 [527/745] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.324 [528/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:00.324 [529/745] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:00.324 [530/745] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:00.324 [531/745] Linking static target drivers/librte_mempool_ring.a 00:01:00.324 [532/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:00.324 [533/745] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:00.324 [534/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:00.324 [535/745] Compiling C object 
app/dpdk-proc-info.p/proc-info_main.c.o 00:01:00.324 [536/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:00.587 [537/745] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:00.587 [538/745] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.587 [539/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:00.869 [540/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:00.869 [541/745] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.869 [542/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:01.127 [543/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:01.127 [544/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:01.127 [545/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:01.127 [546/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:01.388 [547/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:01.388 [548/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:01.388 [549/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:01.388 [550/745] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:01.388 [551/745] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:01.646 [552/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:01.646 [553/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:02.245 [554/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:02.245 [555/745] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:02.245 [556/745] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:02.245 [557/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:02.245 [558/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:02.245 [559/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:02.515 [560/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:02.516 [561/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:02.516 [562/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:02.516 [563/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:02.777 [564/745] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:02.777 [565/745] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:02.777 [566/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:02.777 [567/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:02.777 [568/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:03.040 [569/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:03.040 [570/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:03.040 [571/745] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:03.040 [572/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:03.040 [573/745] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:03.040 [574/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:03.298 [575/745] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:03.298 [576/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:03.298 [577/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:03.562 [578/745] Linking target lib/librte_eal.so.23.0 00:01:03.562 [579/745] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:03.562 [580/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:03.562 [581/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:03.562 [582/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:03.562 [583/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:03.562 [584/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:03.562 [585/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:03.562 [586/745] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:01:03.821 [587/745] Linking target lib/librte_ring.so.23.0 00:01:03.821 [588/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:04.086 [589/745] Linking target lib/librte_meter.so.23.0 00:01:04.086 [590/745] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:01:04.086 [591/745] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:04.086 [592/745] Linking target lib/librte_rcu.so.23.0 00:01:04.086 [593/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:04.086 [594/745] Linking target lib/librte_mempool.so.23.0 00:01:04.086 [595/745] Linking target lib/librte_pci.so.23.0 00:01:04.347 [596/745] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:01:04.347 [597/745] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:01:04.347 [598/745] Linking target lib/librte_timer.so.23.0 00:01:04.347 [599/745] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:01:04.347 [600/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:04.347 [601/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:04.347 [602/745] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:01:04.347 [603/745] Linking target lib/librte_acl.so.23.0 00:01:04.347 [604/745] Linking target lib/librte_mbuf.so.23.0 00:01:04.347 [605/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:04.347 [606/745] Linking target lib/librte_cfgfile.so.23.0 00:01:04.347 [607/745] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:04.614 [608/745] Linking target lib/librte_jobstats.so.23.0 00:01:04.614 [609/745] Linking target lib/librte_rawdev.so.23.0 00:01:04.614 [610/745] Linking target lib/librte_dmadev.so.23.0 00:01:04.614 [611/745] Linking target lib/librte_rib.so.23.0 00:01:04.614 [612/745] Linking 
target lib/librte_stack.so.23.0 00:01:04.614 [613/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:04.614 [614/745] Linking target lib/librte_graph.so.23.0 00:01:04.614 [615/745] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:01:04.614 [616/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:04.614 [617/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:04.614 [618/745] Linking target drivers/librte_bus_vdev.so.23.0 00:01:04.614 [619/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:04.614 [620/745] Linking target drivers/librte_bus_pci.so.23.0 00:01:04.614 [621/745] Linking target drivers/librte_mempool_ring.so.23.0 00:01:04.614 [622/745] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:01:04.614 [623/745] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:01:04.872 [624/745] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:04.872 [625/745] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:01:04.872 [626/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:04.872 [627/745] Linking target lib/librte_bbdev.so.23.0 00:01:04.872 [628/745] Linking target lib/librte_compressdev.so.23.0 00:01:04.872 [629/745] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:01:04.872 [630/745] Linking target lib/librte_cryptodev.so.23.0 00:01:04.872 [631/745] Linking target lib/librte_net.so.23.0 00:01:04.872 [632/745] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:01:04.872 [633/745] Linking target lib/librte_distributor.so.23.0 00:01:04.872 [634/745] Linking target lib/librte_gpudev.so.23.0 00:01:04.872 [635/745] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:01:04.872 [636/745] Linking target lib/librte_reorder.so.23.0 00:01:04.872 [637/745] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:04.872 [638/745] Linking target lib/librte_regexdev.so.23.0 00:01:04.872 [639/745] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:01:04.872 [640/745] Linking target lib/librte_fib.so.23.0 00:01:04.872 [641/745] Linking target lib/librte_sched.so.23.0 00:01:04.872 [642/745] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:04.872 [643/745] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:05.129 [644/745] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:05.129 [645/745] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:01:05.129 [646/745] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:01:05.129 [647/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:05.129 [648/745] Linking target lib/librte_hash.so.23.0 00:01:05.129 [649/745] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:01:05.129 [650/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:05.129 [651/745] Linking target lib/librte_security.so.23.0 00:01:05.129 [652/745] Linking target lib/librte_cmdline.so.23.0 00:01:05.129 [653/745] Linking target lib/librte_ethdev.so.23.0 00:01:05.129 [654/745] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:05.129 [655/745] Generating symbol 
file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:01:05.388 [656/745] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:01:05.388 [657/745] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:05.388 [658/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:05.388 [659/745] Linking target lib/librte_efd.so.23.0 00:01:05.388 [660/745] Linking target lib/librte_member.so.23.0 00:01:05.388 [661/745] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:01:05.388 [662/745] Linking target lib/librte_lpm.so.23.0 00:01:05.388 [663/745] Linking target lib/librte_ipsec.so.23.0 00:01:05.388 [664/745] Linking target lib/librte_metrics.so.23.0 00:01:05.388 [665/745] Linking target lib/librte_eventdev.so.23.0 00:01:05.388 [666/745] Linking target lib/librte_pcapng.so.23.0 00:01:05.388 [667/745] Linking target lib/librte_power.so.23.0 00:01:05.388 [668/745] Linking target lib/librte_bpf.so.23.0 00:01:05.388 [669/745] Linking target lib/librte_ip_frag.so.23.0 00:01:05.388 [670/745] Linking target lib/librte_gso.so.23.0 00:01:05.388 [671/745] Linking target lib/librte_gro.so.23.0 00:01:05.388 [672/745] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:05.388 [673/745] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:05.388 [674/745] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:05.647 [675/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:05.647 [676/745] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:01:05.647 [677/745] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:01:05.647 [678/745] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:01:05.647 [679/745] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:01:05.647 [680/745] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:01:05.647 [681/745] Linking target lib/librte_latencystats.so.23.0 00:01:05.647 [682/745] Linking target lib/librte_bitratestats.so.23.0 00:01:05.647 [683/745] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:01:05.647 [684/745] Linking target lib/librte_pdump.so.23.0 00:01:05.647 [685/745] Linking target lib/librte_port.so.23.0 00:01:05.647 [686/745] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:05.904 [687/745] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:05.904 [688/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:05.904 [689/745] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:01:05.904 [690/745] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:05.904 [691/745] Linking target lib/librte_table.so.23.0 00:01:05.904 [692/745] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:06.161 [693/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:06.161 [694/745] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:01:06.161 [695/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:06.161 [696/745] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:06.418 [697/745] Compiling C object 
app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:06.676 [698/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:06.676 [699/745] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:06.676 [700/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:06.934 [701/745] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:06.934 [702/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:07.191 [703/745] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:07.191 [704/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:07.191 [705/745] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:07.191 [706/745] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:07.191 [707/745] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:07.191 [708/745] Linking static target drivers/librte_net_i40e.a 00:01:07.756 [709/745] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:07.756 [710/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:07.756 [711/745] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.014 [712/745] Linking target drivers/librte_net_i40e.so.23.0 00:01:08.946 [713/745] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:08.946 [714/745] Linking static target lib/librte_node.a 00:01:09.203 [715/745] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:09.203 [716/745] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.203 [717/745] Linking target lib/librte_node.so.23.0 00:01:10.134 [718/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:10.698 [719/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:18.800 [720/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:50.899 [721/745] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:50.899 [722/745] Linking static target lib/librte_vhost.a 00:01:50.899 [723/745] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.899 [724/745] Linking target lib/librte_vhost.so.23.0 00:02:00.870 [725/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:00.870 [726/745] Linking static target lib/librte_pipeline.a 00:02:01.435 [727/745] Linking target app/dpdk-test-sad 00:02:01.435 [728/745] Linking target app/dpdk-test-fib 00:02:01.435 [729/745] Linking target app/dpdk-test-pipeline 00:02:01.435 [730/745] Linking target app/dpdk-test-gpudev 00:02:01.435 [731/745] Linking target app/dpdk-test-cmdline 00:02:01.435 [732/745] Linking target app/dpdk-dumpcap 00:02:01.435 [733/745] Linking target app/dpdk-pdump 00:02:01.435 [734/745] Linking target app/dpdk-test-acl 00:02:01.435 [735/745] Linking target app/dpdk-test-crypto-perf 00:02:01.435 [736/745] Linking target app/dpdk-test-flow-perf 00:02:01.435 [737/745] Linking target app/dpdk-test-regex 00:02:01.435 [738/745] Linking target app/dpdk-test-security-perf 00:02:01.435 [739/745] Linking target app/dpdk-proc-info 00:02:01.435 [740/745] Linking target app/dpdk-test-bbdev 00:02:01.435 [741/745] Linking target app/dpdk-test-eventdev 00:02:01.435 [742/745] Linking target 
app/dpdk-test-compress-perf 00:02:01.435 [743/745] Linking target app/dpdk-testpmd 00:02:03.339 [744/745] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.339 [745/745] Linking target lib/librte_pipeline.so.23.0 00:02:03.339 01:30:27 build_native_dpdk -- common/autobuild_common.sh@187 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:02:03.339 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:03.339 [0/1] Installing files. 00:02:03.600 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:03.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:02:03.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:03.601 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:03.601 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.601 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.602 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:03.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:03.603 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.603 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.603 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.604 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:03.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:03.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:03.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:03.604 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:03.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:03.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:03.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:03.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:03.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:03.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:03.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:03.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.864 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:03.864 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:03.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:03.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:03.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:03.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:03.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 
00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.865 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:03.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:03.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:03.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:03.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:03.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:03.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:03.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:03.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:03.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:03.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:03.866 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 
Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_bitratestats.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_pcapng.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:03.866 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.125 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.125 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.125 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.125 Installing lib/librte_pipeline.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.125 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.125 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.125 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.125 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.125 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.125 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:04.125 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.125 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:04.125 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.125 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:04.125 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.125 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:04.125 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.125 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.125 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.125 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.125 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.125 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.125 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.125 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.126 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.126 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.126 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.126 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.126 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.126 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.126 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.126 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.126 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.126 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:04.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:04.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:04.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:04.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:04.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:04.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:04.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:04.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:04.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:04.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:04.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:04.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.388 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.390 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.391 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
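The long run of Installing lines above is meson's install step flattening DPDK's per-library source layout (lib/ethdev, lib/cmdline, lib/hash, lib/table, ...) into the single staging directory build/include, so every public rte_*.h header becomes reachable through one include path. A minimal sketch of checking that staging, assuming the workspace paths shown in this log and a POSIX shell:

    # All public headers land flat in one directory, whatever lib/ subdir they came from.
    DPDK_INC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
    ls "$DPDK_INC" | grep -c '^rte_.*\.h$'                        # count of staged rte_*.h headers
    test -f "$DPDK_INC/rte_ethdev.h" && echo 'ethdev API staged'  # installed from lib/ethdev above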
00:02:04.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:04.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:04.392 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:04.392 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:02:04.392 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:04.392 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:02:04.392 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:04.392 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:02:04.392 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:04.392 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:02:04.392 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:04.392 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:02:04.392 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:04.392 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:02:04.392 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:04.392 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:02:04.392 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:04.392 Installing symlink pointing to librte_net.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23 00:02:04.392 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:04.392 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:02:04.392 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:04.392 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:02:04.392 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:04.392 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:02:04.392 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:04.392 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:02:04.392 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:04.392 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:02:04.392 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:04.392 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:02:04.392 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:04.392 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:02:04.392 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:04.392 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:02:04.392 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:04.392 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:02:04.392 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:04.392 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:02:04.392 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:04.392 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:02:04.392 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:04.392 Installing symlink pointing to librte_cfgfile.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:02:04.392 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:04.392 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:02:04.392 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:04.392 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:02:04.392 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:04.392 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:02:04.392 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:04.392 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:02:04.392 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:04.392 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:02:04.392 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:04.393 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:02:04.393 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:04.393 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:02:04.393 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:04.393 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:02:04.393 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:04.393 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:02:04.393 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:04.393 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:02:04.393 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:04.393 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:02:04.393 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:04.393 Installing symlink pointing to 
librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:02:04.393 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:04.393 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23 00:02:04.393 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:04.393 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:02:04.393 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:04.393 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23 00:02:04.393 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:04.393 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:02:04.393 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:04.393 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:02:04.393 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:04.393 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:02:04.393 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:04.393 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:02:04.393 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:04.393 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:02:04.393 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:04.393 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:02:04.393 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:04.393 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:02:04.393 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:04.393 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:02:04.393 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:04.393 Installing symlink pointing to librte_vhost.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:02:04.393 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:04.393 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:02:04.393 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:04.393 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:02:04.393 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:04.393 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:02:04.393 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:04.393 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:02:04.393 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:04.393 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:02:04.393 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:04.393 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:02:04.393 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:04.393 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:02:04.393 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:04.393 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:02:04.393 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:04.393 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:04.393 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:04.393 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:04.393 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:04.393 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:04.393 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:04.393 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:04.393 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:04.393 './librte_bus_vdev.so.23' -> 
'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:04.393 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:04.393 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:04.393 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:04.393 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:04.393 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:04.393 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:04.393 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:04.393 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:04.393 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:04.393 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:04.393 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:04.393 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:04.393 01:30:28 build_native_dpdk -- common/autobuild_common.sh@189 -- $ uname -s 00:02:04.393 01:30:28 build_native_dpdk -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:04.393 01:30:28 build_native_dpdk -- common/autobuild_common.sh@200 -- $ cat 00:02:04.393 01:30:28 build_native_dpdk -- common/autobuild_common.sh@205 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:04.393 00:02:04.393 real 1m21.692s 00:02:04.393 user 14m35.468s 00:02:04.393 sys 1m50.094s 00:02:04.393 01:30:28 build_native_dpdk -- common/autotest_common.sh@1123 -- $ xtrace_disable 00:02:04.393 01:30:28 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:04.394 ************************************ 00:02:04.394 END TEST build_native_dpdk 00:02:04.394 ************************************ 00:02:04.394 01:30:28 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:04.394 01:30:28 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:04.394 01:30:28 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:04.394 01:30:28 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:04.394 01:30:28 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:04.394 01:30:28 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:04.394 01:30:28 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:04.394 01:30:28 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:04.394 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 
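The symlink entries above set up the standard SONAME chain for each library (librte_X.so -> librte_X.so.23 -> librte_X.so.23.0), with the PMDs additionally rehomed under dpdk/pmds-23.0 by the custom symlink-drivers-solibs.sh step, and the configure run that follows locates all of it through the staged pkg-config files. A minimal sketch of that consumption, assuming the same workspace root and a stock pkg-config(1):

    DPDK_BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
    export PKG_CONFIG_PATH="$DPDK_BUILD/lib/pkgconfig"   # where libdpdk.pc was installed above
    pkg-config --modversion libdpdk            # 22.11.4 in this run
    pkg-config --cflags --libs libdpdk         # one -I path plus the -lrte_* link line
    # The dev symlink satisfies -lrte_eal at link time; the runtime loader then
    # pins the ABI-versioned file through the .so.23 link:
    readlink "$DPDK_BUILD/lib/librte_eal.so"   # -> librte_eal.so.23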
00:02:04.653 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:04.653 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:04.653 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:04.911 Using 'verbs' RDMA provider 00:02:15.447 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:25.419 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:25.419 Creating mk/config.mk...done. 00:02:25.419 Creating mk/cc.flags.mk...done. 00:02:25.419 Type 'make' to build. 00:02:25.420 01:30:47 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:02:25.420 01:30:47 -- common/autotest_common.sh@1098 -- $ '[' 3 -le 1 ']' 00:02:25.420 01:30:47 -- common/autotest_common.sh@1104 -- $ xtrace_disable 00:02:25.420 01:30:47 -- common/autotest_common.sh@10 -- $ set +x 00:02:25.420 ************************************ 00:02:25.420 START TEST make 00:02:25.420 ************************************ 00:02:25.420 01:30:47 make -- common/autotest_common.sh@1122 -- $ make -j48 00:02:25.420 make[1]: Nothing to be done for 'all'. 00:02:25.997 The Meson build system 00:02:25.997 Version: 1.3.1 00:02:25.997 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:25.997 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:25.997 Build type: native build 00:02:25.997 Project name: libvfio-user 00:02:25.997 Project version: 0.0.1 00:02:25.997 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:25.997 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:25.997 Host machine cpu family: x86_64 00:02:25.997 Host machine cpu: x86_64 00:02:25.997 Run-time dependency threads found: YES 00:02:25.997 Library dl found: YES 00:02:25.997 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:25.997 Run-time dependency json-c found: YES 0.17 00:02:25.997 Run-time dependency cmocka found: YES 1.1.7 00:02:25.997 Program pytest-3 found: NO 00:02:25.997 Program flake8 found: NO 00:02:25.997 Program misspell-fixer found: NO 00:02:25.997 Program restructuredtext-lint found: NO 00:02:25.997 Program valgrind found: YES (/usr/bin/valgrind) 00:02:25.997 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:25.997 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:25.997 Compiler for C supports arguments -Wwrite-strings: YES 00:02:25.997 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:25.997 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:25.997 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:25.997 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:25.997 Build targets in project: 8 00:02:25.997 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:25.997 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:25.997 00:02:25.997 libvfio-user 0.0.1 00:02:25.997 00:02:25.997 User defined options 00:02:25.997 buildtype : debug 00:02:25.997 default_library: shared 00:02:25.997 libdir : /usr/local/lib 00:02:25.997 00:02:25.997 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:26.571 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:26.832 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:26.832 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:26.832 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:26.832 [4/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:26.832 [5/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:26.832 [6/37] Compiling C object samples/null.p/null.c.o 00:02:26.832 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:26.832 [8/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:26.832 [9/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:26.832 [10/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:26.832 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:26.832 [12/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:26.832 [13/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:26.832 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:27.092 [15/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:27.092 [16/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:27.092 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:27.092 [18/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:27.092 [19/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:27.092 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:27.092 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:27.092 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:27.092 [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:27.092 [24/37] Compiling C object samples/server.p/server.c.o 00:02:27.092 [25/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:27.092 [26/37] Compiling C object samples/client.p/client.c.o 00:02:27.092 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:27.092 [28/37] Linking target samples/client 00:02:27.092 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:02:27.361 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:27.361 [31/37] Linking target test/unit_tests 00:02:27.361 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:27.361 [33/37] Linking target samples/server 00:02:27.361 [34/37] Linking target samples/null 00:02:27.361 [35/37] Linking target samples/lspci 00:02:27.361 [36/37] Linking target samples/gpio-pci-idio-16 00:02:27.361 [37/37] Linking target samples/shadow_ioeventfd_server 00:02:27.621 INFO: autodetecting backend as ninja 00:02:27.621 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
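At this point meson has configured libvfio-user (8 build targets, buildtype debug, shared default_library) and ninja has compiled and linked all 37 steps; the next line is the staged DESTDIR install. A rough standalone equivalent of this sequence, under the assumption that SPDK's wrapper passes essentially these options (the exact flags it uses are not shown in this log):

    SRC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
    BLD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
    meson setup "$BLD" "$SRC" -Dbuildtype=debug -Ddefault_library=shared -Dlibdir=/usr/local/lib
    ninja -C "$BLD"                                      # the [1/37]..[37/37] steps above
    DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user \
        meson install --quiet -C "$BLD"                  # stages under DESTDIR, not /usr/local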
00:02:27.621 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:28.202 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:28.202 ninja: no work to do. 00:02:40.401 CC lib/ut_mock/mock.o 00:02:40.401 CC lib/log/log.o 00:02:40.401 CC lib/log/log_flags.o 00:02:40.401 CC lib/log/log_deprecated.o 00:02:40.401 CC lib/ut/ut.o 00:02:40.401 LIB libspdk_ut_mock.a 00:02:40.401 SO libspdk_ut_mock.so.6.0 00:02:40.401 LIB libspdk_log.a 00:02:40.401 LIB libspdk_ut.a 00:02:40.401 SO libspdk_ut.so.2.0 00:02:40.401 SO libspdk_log.so.7.0 00:02:40.401 SYMLINK libspdk_ut_mock.so 00:02:40.401 SYMLINK libspdk_ut.so 00:02:40.401 SYMLINK libspdk_log.so 00:02:40.401 CC lib/ioat/ioat.o 00:02:40.401 CC lib/dma/dma.o 00:02:40.401 CXX lib/trace_parser/trace.o 00:02:40.401 CC lib/util/base64.o 00:02:40.401 CC lib/util/bit_array.o 00:02:40.401 CC lib/util/cpuset.o 00:02:40.401 CC lib/util/crc16.o 00:02:40.401 CC lib/util/crc32.o 00:02:40.401 CC lib/util/crc32c.o 00:02:40.401 CC lib/util/crc32_ieee.o 00:02:40.401 CC lib/util/crc64.o 00:02:40.401 CC lib/util/dif.o 00:02:40.401 CC lib/util/fd.o 00:02:40.401 CC lib/util/file.o 00:02:40.401 CC lib/util/hexlify.o 00:02:40.401 CC lib/util/iov.o 00:02:40.401 CC lib/util/math.o 00:02:40.401 CC lib/util/pipe.o 00:02:40.401 CC lib/util/strerror_tls.o 00:02:40.401 CC lib/util/string.o 00:02:40.401 CC lib/util/uuid.o 00:02:40.401 CC lib/util/fd_group.o 00:02:40.401 CC lib/util/xor.o 00:02:40.401 CC lib/util/zipf.o 00:02:40.401 CC lib/vfio_user/host/vfio_user_pci.o 00:02:40.401 CC lib/vfio_user/host/vfio_user.o 00:02:40.401 LIB libspdk_dma.a 00:02:40.401 SO libspdk_dma.so.4.0 00:02:40.401 SYMLINK libspdk_dma.so 00:02:40.401 LIB libspdk_ioat.a 00:02:40.401 SO libspdk_ioat.so.7.0 00:02:40.401 LIB libspdk_vfio_user.a 00:02:40.401 SYMLINK libspdk_ioat.so 00:02:40.401 SO libspdk_vfio_user.so.5.0 00:02:40.401 SYMLINK libspdk_vfio_user.so 00:02:40.401 LIB libspdk_util.a 00:02:40.658 SO libspdk_util.so.9.0 00:02:40.658 SYMLINK libspdk_util.so 00:02:40.915 CC lib/vmd/vmd.o 00:02:40.915 CC lib/idxd/idxd.o 00:02:40.915 CC lib/json/json_parse.o 00:02:40.915 CC lib/env_dpdk/env.o 00:02:40.915 CC lib/conf/conf.o 00:02:40.915 CC lib/rdma/common.o 00:02:40.915 CC lib/vmd/led.o 00:02:40.915 CC lib/idxd/idxd_user.o 00:02:40.915 CC lib/json/json_util.o 00:02:40.915 CC lib/rdma/rdma_verbs.o 00:02:40.915 CC lib/env_dpdk/memory.o 00:02:40.915 CC lib/json/json_write.o 00:02:40.915 CC lib/env_dpdk/pci.o 00:02:40.915 CC lib/env_dpdk/init.o 00:02:40.915 CC lib/env_dpdk/threads.o 00:02:40.915 CC lib/env_dpdk/pci_ioat.o 00:02:40.915 CC lib/env_dpdk/pci_virtio.o 00:02:40.915 CC lib/env_dpdk/pci_vmd.o 00:02:40.915 CC lib/env_dpdk/pci_idxd.o 00:02:40.915 CC lib/env_dpdk/pci_event.o 00:02:40.915 CC lib/env_dpdk/sigbus_handler.o 00:02:40.915 CC lib/env_dpdk/pci_dpdk.o 00:02:40.915 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:40.915 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:40.915 LIB libspdk_trace_parser.a 00:02:40.915 SO libspdk_trace_parser.so.5.0 00:02:40.915 SYMLINK libspdk_trace_parser.so 00:02:41.171 LIB libspdk_conf.a 00:02:41.171 SO libspdk_conf.so.6.0 00:02:41.171 LIB libspdk_json.a 00:02:41.171 SYMLINK libspdk_conf.so 00:02:41.171 SO libspdk_json.so.6.0 00:02:41.428 LIB libspdk_rdma.a 00:02:41.428 SYMLINK libspdk_json.so 00:02:41.428 SO libspdk_rdma.so.6.0 00:02:41.428 SYMLINK libspdk_rdma.so 00:02:41.428 CC 
lib/jsonrpc/jsonrpc_server.o 00:02:41.428 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:41.428 CC lib/jsonrpc/jsonrpc_client.o 00:02:41.428 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:41.428 LIB libspdk_idxd.a 00:02:41.686 SO libspdk_idxd.so.12.0 00:02:41.686 LIB libspdk_vmd.a 00:02:41.686 SO libspdk_vmd.so.6.0 00:02:41.686 SYMLINK libspdk_idxd.so 00:02:41.686 SYMLINK libspdk_vmd.so 00:02:41.686 LIB libspdk_jsonrpc.a 00:02:41.686 SO libspdk_jsonrpc.so.6.0 00:02:41.944 SYMLINK libspdk_jsonrpc.so 00:02:41.944 CC lib/rpc/rpc.o 00:02:42.201 LIB libspdk_rpc.a 00:02:42.201 SO libspdk_rpc.so.6.0 00:02:42.201 SYMLINK libspdk_rpc.so 00:02:42.459 CC lib/keyring/keyring.o 00:02:42.459 CC lib/keyring/keyring_rpc.o 00:02:42.459 CC lib/notify/notify.o 00:02:42.459 CC lib/trace/trace.o 00:02:42.459 CC lib/notify/notify_rpc.o 00:02:42.459 CC lib/trace/trace_flags.o 00:02:42.459 CC lib/trace/trace_rpc.o 00:02:42.716 LIB libspdk_notify.a 00:02:42.716 SO libspdk_notify.so.6.0 00:02:42.716 LIB libspdk_keyring.a 00:02:42.716 SYMLINK libspdk_notify.so 00:02:42.716 SO libspdk_keyring.so.1.0 00:02:42.716 LIB libspdk_trace.a 00:02:42.716 SO libspdk_trace.so.10.0 00:02:42.716 SYMLINK libspdk_keyring.so 00:02:42.716 SYMLINK libspdk_trace.so 00:02:42.974 LIB libspdk_env_dpdk.a 00:02:42.974 CC lib/thread/thread.o 00:02:42.974 CC lib/thread/iobuf.o 00:02:42.974 CC lib/sock/sock.o 00:02:42.974 CC lib/sock/sock_rpc.o 00:02:42.974 SO libspdk_env_dpdk.so.14.0 00:02:43.232 SYMLINK libspdk_env_dpdk.so 00:02:43.232 LIB libspdk_sock.a 00:02:43.489 SO libspdk_sock.so.9.0 00:02:43.489 SYMLINK libspdk_sock.so 00:02:43.489 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:43.489 CC lib/nvme/nvme_ctrlr.o 00:02:43.489 CC lib/nvme/nvme_fabric.o 00:02:43.747 CC lib/nvme/nvme_ns_cmd.o 00:02:43.747 CC lib/nvme/nvme_ns.o 00:02:43.747 CC lib/nvme/nvme_pcie_common.o 00:02:43.747 CC lib/nvme/nvme_pcie.o 00:02:43.747 CC lib/nvme/nvme_qpair.o 00:02:43.747 CC lib/nvme/nvme.o 00:02:43.747 CC lib/nvme/nvme_quirks.o 00:02:43.747 CC lib/nvme/nvme_transport.o 00:02:43.747 CC lib/nvme/nvme_discovery.o 00:02:43.747 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:43.747 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:43.747 CC lib/nvme/nvme_tcp.o 00:02:43.747 CC lib/nvme/nvme_opal.o 00:02:43.747 CC lib/nvme/nvme_io_msg.o 00:02:43.747 CC lib/nvme/nvme_poll_group.o 00:02:43.747 CC lib/nvme/nvme_zns.o 00:02:43.747 CC lib/nvme/nvme_stubs.o 00:02:43.747 CC lib/nvme/nvme_auth.o 00:02:43.747 CC lib/nvme/nvme_cuse.o 00:02:43.747 CC lib/nvme/nvme_vfio_user.o 00:02:43.747 CC lib/nvme/nvme_rdma.o 00:02:44.680 LIB libspdk_thread.a 00:02:44.680 SO libspdk_thread.so.10.0 00:02:44.680 SYMLINK libspdk_thread.so 00:02:44.938 CC lib/virtio/virtio.o 00:02:44.938 CC lib/accel/accel.o 00:02:44.938 CC lib/vfu_tgt/tgt_endpoint.o 00:02:44.938 CC lib/blob/blobstore.o 00:02:44.938 CC lib/init/json_config.o 00:02:44.938 CC lib/virtio/virtio_vhost_user.o 00:02:44.938 CC lib/init/subsystem.o 00:02:44.938 CC lib/virtio/virtio_vfio_user.o 00:02:44.938 CC lib/blob/request.o 00:02:44.938 CC lib/accel/accel_rpc.o 00:02:44.938 CC lib/vfu_tgt/tgt_rpc.o 00:02:44.938 CC lib/init/subsystem_rpc.o 00:02:44.938 CC lib/blob/zeroes.o 00:02:44.938 CC lib/virtio/virtio_pci.o 00:02:44.938 CC lib/accel/accel_sw.o 00:02:44.938 CC lib/init/rpc.o 00:02:44.938 CC lib/blob/blob_bs_dev.o 00:02:45.195 LIB libspdk_init.a 00:02:45.195 SO libspdk_init.so.5.0 00:02:45.195 LIB libspdk_virtio.a 00:02:45.195 LIB libspdk_vfu_tgt.a 00:02:45.195 SYMLINK libspdk_init.so 00:02:45.195 SO libspdk_vfu_tgt.so.3.0 00:02:45.195 SO libspdk_virtio.so.7.0 
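The make output running through here follows SPDK's fixed per-library pattern: CC compiles one object, LIB archives the static libspdk_*.a, SO links the version-suffixed shared object (present because the configure line above passed --with-shared), and SYMLINK drops the unversioned development link beside it. A small inspection sketch, assuming the libraries land under spdk/build/lib in this workspace and GNU binutils is available:

    SPDK_LIB=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib
    # The versioned object from the 'SO libspdk_log.so.7.0' step above;
    # the grep comes back empty if no SONAME was embedded.
    readelf -d "$SPDK_LIB/libspdk_log.so.7.0" | grep -i soname
    nm -D --defined-only "$SPDK_LIB/libspdk_log.so.7.0" | head   # first few exported symbols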
00:02:45.195 SYMLINK libspdk_vfu_tgt.so 00:02:45.195 SYMLINK libspdk_virtio.so 00:02:45.453 CC lib/event/app.o 00:02:45.453 CC lib/event/reactor.o 00:02:45.453 CC lib/event/log_rpc.o 00:02:45.453 CC lib/event/app_rpc.o 00:02:45.453 CC lib/event/scheduler_static.o 00:02:45.712 LIB libspdk_event.a 00:02:45.712 SO libspdk_event.so.13.0 00:02:45.969 LIB libspdk_accel.a 00:02:45.969 SYMLINK libspdk_event.so 00:02:45.969 SO libspdk_accel.so.15.0 00:02:45.969 SYMLINK libspdk_accel.so 00:02:45.969 LIB libspdk_nvme.a 00:02:46.227 SO libspdk_nvme.so.13.0 00:02:46.227 CC lib/bdev/bdev.o 00:02:46.227 CC lib/bdev/bdev_rpc.o 00:02:46.227 CC lib/bdev/bdev_zone.o 00:02:46.227 CC lib/bdev/part.o 00:02:46.227 CC lib/bdev/scsi_nvme.o 00:02:46.486 SYMLINK libspdk_nvme.so 00:02:47.858 LIB libspdk_blob.a 00:02:47.858 SO libspdk_blob.so.11.0 00:02:47.858 SYMLINK libspdk_blob.so 00:02:48.115 CC lib/blobfs/blobfs.o 00:02:48.115 CC lib/blobfs/tree.o 00:02:48.115 CC lib/lvol/lvol.o 00:02:48.680 LIB libspdk_blobfs.a 00:02:48.938 SO libspdk_blobfs.so.10.0 00:02:48.938 SYMLINK libspdk_blobfs.so 00:02:48.938 LIB libspdk_lvol.a 00:02:48.938 SO libspdk_lvol.so.10.0 00:02:48.938 LIB libspdk_bdev.a 00:02:48.938 SYMLINK libspdk_lvol.so 00:02:48.938 SO libspdk_bdev.so.15.0 00:02:49.203 SYMLINK libspdk_bdev.so 00:02:49.203 CC lib/ublk/ublk.o 00:02:49.203 CC lib/ftl/ftl_core.o 00:02:49.203 CC lib/scsi/dev.o 00:02:49.203 CC lib/nbd/nbd.o 00:02:49.203 CC lib/ublk/ublk_rpc.o 00:02:49.203 CC lib/ftl/ftl_init.o 00:02:49.203 CC lib/scsi/lun.o 00:02:49.203 CC lib/nvmf/ctrlr.o 00:02:49.203 CC lib/scsi/port.o 00:02:49.203 CC lib/nbd/nbd_rpc.o 00:02:49.203 CC lib/ftl/ftl_layout.o 00:02:49.203 CC lib/nvmf/ctrlr_discovery.o 00:02:49.203 CC lib/ftl/ftl_debug.o 00:02:49.203 CC lib/scsi/scsi.o 00:02:49.203 CC lib/nvmf/ctrlr_bdev.o 00:02:49.203 CC lib/scsi/scsi_bdev.o 00:02:49.203 CC lib/nvmf/subsystem.o 00:02:49.203 CC lib/ftl/ftl_io.o 00:02:49.203 CC lib/nvmf/nvmf.o 00:02:49.203 CC lib/ftl/ftl_sb.o 00:02:49.203 CC lib/nvmf/nvmf_rpc.o 00:02:49.203 CC lib/scsi/scsi_pr.o 00:02:49.203 CC lib/scsi/scsi_rpc.o 00:02:49.203 CC lib/ftl/ftl_l2p.o 00:02:49.203 CC lib/nvmf/transport.o 00:02:49.203 CC lib/nvmf/tcp.o 00:02:49.203 CC lib/scsi/task.o 00:02:49.203 CC lib/ftl/ftl_l2p_flat.o 00:02:49.203 CC lib/ftl/ftl_band.o 00:02:49.203 CC lib/ftl/ftl_nv_cache.o 00:02:49.203 CC lib/nvmf/stubs.o 00:02:49.203 CC lib/nvmf/mdns_server.o 00:02:49.203 CC lib/ftl/ftl_band_ops.o 00:02:49.203 CC lib/ftl/ftl_writer.o 00:02:49.203 CC lib/nvmf/vfio_user.o 00:02:49.203 CC lib/nvmf/rdma.o 00:02:49.203 CC lib/ftl/ftl_rq.o 00:02:49.203 CC lib/nvmf/auth.o 00:02:49.203 CC lib/ftl/ftl_reloc.o 00:02:49.203 CC lib/ftl/ftl_l2p_cache.o 00:02:49.203 CC lib/ftl/ftl_p2l.o 00:02:49.203 CC lib/ftl/mngt/ftl_mngt.o 00:02:49.203 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:49.203 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:49.203 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:49.203 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:49.203 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:49.203 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:49.775 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:49.775 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:49.775 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:49.775 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:49.775 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:49.775 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:49.775 CC lib/ftl/utils/ftl_conf.o 00:02:49.775 CC lib/ftl/utils/ftl_md.o 00:02:49.775 CC lib/ftl/utils/ftl_mempool.o 00:02:49.775 CC lib/ftl/utils/ftl_bitmap.o 00:02:49.775 CC lib/ftl/utils/ftl_property.o 00:02:49.775 CC 
lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:49.775 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:49.775 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:49.775 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:49.775 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:49.775 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:49.775 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:49.775 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:49.775 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:50.037 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:50.037 CC lib/ftl/base/ftl_base_dev.o 00:02:50.037 CC lib/ftl/base/ftl_base_bdev.o 00:02:50.037 CC lib/ftl/ftl_trace.o 00:02:50.037 LIB libspdk_nbd.a 00:02:50.037 SO libspdk_nbd.so.7.0 00:02:50.323 SYMLINK libspdk_nbd.so 00:02:50.323 LIB libspdk_scsi.a 00:02:50.323 LIB libspdk_ublk.a 00:02:50.323 SO libspdk_scsi.so.9.0 00:02:50.323 SO libspdk_ublk.so.3.0 00:02:50.323 SYMLINK libspdk_ublk.so 00:02:50.323 SYMLINK libspdk_scsi.so 00:02:50.581 CC lib/iscsi/conn.o 00:02:50.581 CC lib/vhost/vhost.o 00:02:50.581 CC lib/iscsi/init_grp.o 00:02:50.581 CC lib/vhost/vhost_rpc.o 00:02:50.581 CC lib/iscsi/iscsi.o 00:02:50.581 CC lib/vhost/vhost_scsi.o 00:02:50.581 CC lib/vhost/vhost_blk.o 00:02:50.581 CC lib/iscsi/md5.o 00:02:50.581 CC lib/iscsi/param.o 00:02:50.581 CC lib/vhost/rte_vhost_user.o 00:02:50.581 CC lib/iscsi/portal_grp.o 00:02:50.581 CC lib/iscsi/tgt_node.o 00:02:50.581 CC lib/iscsi/iscsi_subsystem.o 00:02:50.581 CC lib/iscsi/iscsi_rpc.o 00:02:50.581 CC lib/iscsi/task.o 00:02:50.581 LIB libspdk_ftl.a 00:02:50.839 SO libspdk_ftl.so.9.0 00:02:51.097 SYMLINK libspdk_ftl.so 00:02:51.662 LIB libspdk_vhost.a 00:02:51.920 SO libspdk_vhost.so.8.0 00:02:51.920 LIB libspdk_nvmf.a 00:02:51.920 SYMLINK libspdk_vhost.so 00:02:51.920 SO libspdk_nvmf.so.18.0 00:02:51.920 LIB libspdk_iscsi.a 00:02:51.920 SO libspdk_iscsi.so.8.0 00:02:52.178 SYMLINK libspdk_nvmf.so 00:02:52.178 SYMLINK libspdk_iscsi.so 00:02:52.436 CC module/env_dpdk/env_dpdk_rpc.o 00:02:52.436 CC module/vfu_device/vfu_virtio.o 00:02:52.436 CC module/vfu_device/vfu_virtio_blk.o 00:02:52.436 CC module/vfu_device/vfu_virtio_scsi.o 00:02:52.436 CC module/vfu_device/vfu_virtio_rpc.o 00:02:52.436 CC module/blob/bdev/blob_bdev.o 00:02:52.436 CC module/accel/ioat/accel_ioat.o 00:02:52.436 CC module/sock/posix/posix.o 00:02:52.436 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:52.436 CC module/accel/dsa/accel_dsa.o 00:02:52.436 CC module/accel/error/accel_error.o 00:02:52.436 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:52.436 CC module/accel/ioat/accel_ioat_rpc.o 00:02:52.436 CC module/accel/dsa/accel_dsa_rpc.o 00:02:52.436 CC module/keyring/file/keyring.o 00:02:52.436 CC module/accel/error/accel_error_rpc.o 00:02:52.436 CC module/keyring/file/keyring_rpc.o 00:02:52.436 CC module/scheduler/gscheduler/gscheduler.o 00:02:52.436 CC module/accel/iaa/accel_iaa.o 00:02:52.436 CC module/accel/iaa/accel_iaa_rpc.o 00:02:52.694 LIB libspdk_env_dpdk_rpc.a 00:02:52.694 SO libspdk_env_dpdk_rpc.so.6.0 00:02:52.694 SYMLINK libspdk_env_dpdk_rpc.so 00:02:52.694 LIB libspdk_keyring_file.a 00:02:52.694 LIB libspdk_scheduler_gscheduler.a 00:02:52.694 LIB libspdk_scheduler_dpdk_governor.a 00:02:52.694 SO libspdk_scheduler_gscheduler.so.4.0 00:02:52.694 SO libspdk_keyring_file.so.1.0 00:02:52.694 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:52.694 LIB libspdk_accel_error.a 00:02:52.694 LIB libspdk_accel_ioat.a 00:02:52.694 LIB libspdk_scheduler_dynamic.a 00:02:52.694 LIB libspdk_accel_iaa.a 00:02:52.694 SO libspdk_accel_error.so.2.0 00:02:52.694 SO 
libspdk_scheduler_dynamic.so.4.0 00:02:52.694 SO libspdk_accel_ioat.so.6.0 00:02:52.694 SYMLINK libspdk_scheduler_gscheduler.so 00:02:52.694 SYMLINK libspdk_keyring_file.so 00:02:52.694 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:52.952 SO libspdk_accel_iaa.so.3.0 00:02:52.952 LIB libspdk_accel_dsa.a 00:02:52.952 SYMLINK libspdk_accel_error.so 00:02:52.952 SYMLINK libspdk_scheduler_dynamic.so 00:02:52.952 SO libspdk_accel_dsa.so.5.0 00:02:52.952 SYMLINK libspdk_accel_ioat.so 00:02:52.952 LIB libspdk_blob_bdev.a 00:02:52.952 SYMLINK libspdk_accel_iaa.so 00:02:52.952 SO libspdk_blob_bdev.so.11.0 00:02:52.952 SYMLINK libspdk_accel_dsa.so 00:02:52.952 SYMLINK libspdk_blob_bdev.so 00:02:53.210 LIB libspdk_vfu_device.a 00:02:53.210 SO libspdk_vfu_device.so.3.0 00:02:53.210 CC module/bdev/gpt/gpt.o 00:02:53.210 CC module/bdev/gpt/vbdev_gpt.o 00:02:53.210 CC module/bdev/delay/vbdev_delay.o 00:02:53.210 CC module/bdev/null/bdev_null.o 00:02:53.210 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:53.210 CC module/bdev/nvme/bdev_nvme.o 00:02:53.210 CC module/bdev/malloc/bdev_malloc.o 00:02:53.210 CC module/bdev/null/bdev_null_rpc.o 00:02:53.210 CC module/bdev/error/vbdev_error.o 00:02:53.210 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:53.210 CC module/bdev/iscsi/bdev_iscsi.o 00:02:53.210 CC module/bdev/nvme/nvme_rpc.o 00:02:53.210 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:53.210 CC module/bdev/error/vbdev_error_rpc.o 00:02:53.210 CC module/bdev/passthru/vbdev_passthru.o 00:02:53.210 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:53.210 CC module/blobfs/bdev/blobfs_bdev.o 00:02:53.210 CC module/bdev/raid/bdev_raid.o 00:02:53.210 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:53.210 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:53.210 CC module/bdev/raid/bdev_raid_rpc.o 00:02:53.210 CC module/bdev/split/vbdev_split.o 00:02:53.210 CC module/bdev/nvme/bdev_mdns_client.o 00:02:53.210 CC module/bdev/ftl/bdev_ftl.o 00:02:53.210 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:53.210 CC module/bdev/nvme/vbdev_opal.o 00:02:53.210 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:53.210 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:53.210 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:53.210 CC module/bdev/split/vbdev_split_rpc.o 00:02:53.210 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:53.210 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:53.210 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:53.210 CC module/bdev/lvol/vbdev_lvol.o 00:02:53.210 CC module/bdev/raid/bdev_raid_sb.o 00:02:53.210 CC module/bdev/raid/raid0.o 00:02:53.210 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:53.210 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:53.210 CC module/bdev/raid/raid1.o 00:02:53.210 CC module/bdev/raid/concat.o 00:02:53.210 CC module/bdev/aio/bdev_aio.o 00:02:53.210 CC module/bdev/aio/bdev_aio_rpc.o 00:02:53.210 SYMLINK libspdk_vfu_device.so 00:02:53.468 LIB libspdk_sock_posix.a 00:02:53.468 SO libspdk_sock_posix.so.6.0 00:02:53.468 LIB libspdk_blobfs_bdev.a 00:02:53.468 SYMLINK libspdk_sock_posix.so 00:02:53.725 SO libspdk_blobfs_bdev.so.6.0 00:02:53.725 SYMLINK libspdk_blobfs_bdev.so 00:02:53.725 LIB libspdk_bdev_split.a 00:02:53.726 LIB libspdk_bdev_ftl.a 00:02:53.726 LIB libspdk_bdev_null.a 00:02:53.726 SO libspdk_bdev_split.so.6.0 00:02:53.726 LIB libspdk_bdev_passthru.a 00:02:53.726 LIB libspdk_bdev_gpt.a 00:02:53.726 SO libspdk_bdev_ftl.so.6.0 00:02:53.726 SO libspdk_bdev_null.so.6.0 00:02:53.726 SO libspdk_bdev_passthru.so.6.0 00:02:53.726 LIB libspdk_bdev_error.a 00:02:53.726 LIB libspdk_bdev_aio.a 
00:02:53.726 SO libspdk_bdev_gpt.so.6.0 00:02:53.726 SYMLINK libspdk_bdev_split.so 00:02:53.726 SO libspdk_bdev_error.so.6.0 00:02:53.726 SO libspdk_bdev_aio.so.6.0 00:02:53.726 LIB libspdk_bdev_malloc.a 00:02:53.726 SYMLINK libspdk_bdev_ftl.so 00:02:53.726 SYMLINK libspdk_bdev_null.so 00:02:53.726 LIB libspdk_bdev_zone_block.a 00:02:53.726 SYMLINK libspdk_bdev_passthru.so 00:02:53.726 SYMLINK libspdk_bdev_gpt.so 00:02:53.726 LIB libspdk_bdev_delay.a 00:02:53.726 SO libspdk_bdev_malloc.so.6.0 00:02:53.726 LIB libspdk_bdev_iscsi.a 00:02:53.726 SO libspdk_bdev_zone_block.so.6.0 00:02:53.726 SYMLINK libspdk_bdev_error.so 00:02:53.726 SYMLINK libspdk_bdev_aio.so 00:02:53.726 SO libspdk_bdev_delay.so.6.0 00:02:53.726 SO libspdk_bdev_iscsi.so.6.0 00:02:53.983 SYMLINK libspdk_bdev_malloc.so 00:02:53.983 SYMLINK libspdk_bdev_zone_block.so 00:02:53.983 SYMLINK libspdk_bdev_delay.so 00:02:53.983 SYMLINK libspdk_bdev_iscsi.so 00:02:53.983 LIB libspdk_bdev_virtio.a 00:02:53.983 SO libspdk_bdev_virtio.so.6.0 00:02:53.983 LIB libspdk_bdev_lvol.a 00:02:53.983 SO libspdk_bdev_lvol.so.6.0 00:02:53.983 SYMLINK libspdk_bdev_virtio.so 00:02:53.983 SYMLINK libspdk_bdev_lvol.so 00:02:54.549 LIB libspdk_bdev_raid.a 00:02:54.549 SO libspdk_bdev_raid.so.6.0 00:02:54.549 SYMLINK libspdk_bdev_raid.so 00:02:55.484 LIB libspdk_bdev_nvme.a 00:02:55.484 SO libspdk_bdev_nvme.so.7.0 00:02:55.742 SYMLINK libspdk_bdev_nvme.so 00:02:56.000 CC module/event/subsystems/vmd/vmd.o 00:02:56.000 CC module/event/subsystems/iobuf/iobuf.o 00:02:56.000 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:56.000 CC module/event/subsystems/scheduler/scheduler.o 00:02:56.000 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:56.000 CC module/event/subsystems/sock/sock.o 00:02:56.000 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:56.000 CC module/event/subsystems/keyring/keyring.o 00:02:56.000 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:56.259 LIB libspdk_event_keyring.a 00:02:56.259 LIB libspdk_event_vhost_blk.a 00:02:56.259 LIB libspdk_event_sock.a 00:02:56.259 LIB libspdk_event_scheduler.a 00:02:56.259 LIB libspdk_event_vfu_tgt.a 00:02:56.259 LIB libspdk_event_vmd.a 00:02:56.259 SO libspdk_event_keyring.so.1.0 00:02:56.259 SO libspdk_event_vhost_blk.so.3.0 00:02:56.259 SO libspdk_event_sock.so.5.0 00:02:56.259 LIB libspdk_event_iobuf.a 00:02:56.259 SO libspdk_event_scheduler.so.4.0 00:02:56.259 SO libspdk_event_vfu_tgt.so.3.0 00:02:56.259 SO libspdk_event_vmd.so.6.0 00:02:56.259 SO libspdk_event_iobuf.so.3.0 00:02:56.259 SYMLINK libspdk_event_keyring.so 00:02:56.259 SYMLINK libspdk_event_vhost_blk.so 00:02:56.259 SYMLINK libspdk_event_sock.so 00:02:56.259 SYMLINK libspdk_event_vfu_tgt.so 00:02:56.259 SYMLINK libspdk_event_scheduler.so 00:02:56.259 SYMLINK libspdk_event_vmd.so 00:02:56.259 SYMLINK libspdk_event_iobuf.so 00:02:56.516 CC module/event/subsystems/accel/accel.o 00:02:56.516 LIB libspdk_event_accel.a 00:02:56.516 SO libspdk_event_accel.so.6.0 00:02:56.774 SYMLINK libspdk_event_accel.so 00:02:56.774 CC module/event/subsystems/bdev/bdev.o 00:02:57.032 LIB libspdk_event_bdev.a 00:02:57.032 SO libspdk_event_bdev.so.6.0 00:02:57.032 SYMLINK libspdk_event_bdev.so 00:02:57.289 CC module/event/subsystems/ublk/ublk.o 00:02:57.289 CC module/event/subsystems/scsi/scsi.o 00:02:57.289 CC module/event/subsystems/nbd/nbd.o 00:02:57.289 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:57.289 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:57.289 LIB libspdk_event_ublk.a 00:02:57.289 LIB libspdk_event_nbd.a 00:02:57.547 LIB 
libspdk_event_scsi.a 00:02:57.547 SO libspdk_event_ublk.so.3.0 00:02:57.547 SO libspdk_event_nbd.so.6.0 00:02:57.547 SO libspdk_event_scsi.so.6.0 00:02:57.547 SYMLINK libspdk_event_ublk.so 00:02:57.547 SYMLINK libspdk_event_nbd.so 00:02:57.547 SYMLINK libspdk_event_scsi.so 00:02:57.547 LIB libspdk_event_nvmf.a 00:02:57.547 SO libspdk_event_nvmf.so.6.0 00:02:57.547 SYMLINK libspdk_event_nvmf.so 00:02:57.805 CC module/event/subsystems/iscsi/iscsi.o 00:02:57.805 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:57.805 LIB libspdk_event_vhost_scsi.a 00:02:57.805 LIB libspdk_event_iscsi.a 00:02:57.805 SO libspdk_event_vhost_scsi.so.3.0 00:02:57.805 SO libspdk_event_iscsi.so.6.0 00:02:57.805 SYMLINK libspdk_event_vhost_scsi.so 00:02:57.805 SYMLINK libspdk_event_iscsi.so 00:02:58.065 SO libspdk.so.6.0 00:02:58.065 SYMLINK libspdk.so 00:02:58.327 CXX app/trace/trace.o 00:02:58.327 CC app/trace_record/trace_record.o 00:02:58.327 CC app/spdk_lspci/spdk_lspci.o 00:02:58.327 CC app/spdk_nvme_perf/perf.o 00:02:58.327 CC app/spdk_nvme_identify/identify.o 00:02:58.327 CC app/spdk_top/spdk_top.o 00:02:58.327 CC test/rpc_client/rpc_client_test.o 00:02:58.327 TEST_HEADER include/spdk/accel.h 00:02:58.327 CC app/spdk_nvme_discover/discovery_aer.o 00:02:58.327 TEST_HEADER include/spdk/accel_module.h 00:02:58.327 TEST_HEADER include/spdk/assert.h 00:02:58.327 TEST_HEADER include/spdk/barrier.h 00:02:58.327 TEST_HEADER include/spdk/base64.h 00:02:58.327 TEST_HEADER include/spdk/bdev.h 00:02:58.327 TEST_HEADER include/spdk/bdev_module.h 00:02:58.327 TEST_HEADER include/spdk/bdev_zone.h 00:02:58.327 TEST_HEADER include/spdk/bit_array.h 00:02:58.327 TEST_HEADER include/spdk/bit_pool.h 00:02:58.327 TEST_HEADER include/spdk/blob_bdev.h 00:02:58.327 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:58.327 TEST_HEADER include/spdk/blobfs.h 00:02:58.327 TEST_HEADER include/spdk/blob.h 00:02:58.327 TEST_HEADER include/spdk/conf.h 00:02:58.327 TEST_HEADER include/spdk/config.h 00:02:58.327 TEST_HEADER include/spdk/cpuset.h 00:02:58.327 CC app/spdk_dd/spdk_dd.o 00:02:58.327 TEST_HEADER include/spdk/crc16.h 00:02:58.327 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:58.327 TEST_HEADER include/spdk/crc32.h 00:02:58.327 TEST_HEADER include/spdk/crc64.h 00:02:58.327 TEST_HEADER include/spdk/dif.h 00:02:58.327 CC app/nvmf_tgt/nvmf_main.o 00:02:58.327 TEST_HEADER include/spdk/dma.h 00:02:58.327 TEST_HEADER include/spdk/endian.h 00:02:58.327 TEST_HEADER include/spdk/env_dpdk.h 00:02:58.327 CC app/iscsi_tgt/iscsi_tgt.o 00:02:58.327 TEST_HEADER include/spdk/env.h 00:02:58.327 CC app/vhost/vhost.o 00:02:58.327 TEST_HEADER include/spdk/event.h 00:02:58.327 TEST_HEADER include/spdk/fd_group.h 00:02:58.327 TEST_HEADER include/spdk/fd.h 00:02:58.327 TEST_HEADER include/spdk/file.h 00:02:58.327 TEST_HEADER include/spdk/ftl.h 00:02:58.327 TEST_HEADER include/spdk/gpt_spec.h 00:02:58.327 TEST_HEADER include/spdk/hexlify.h 00:02:58.327 TEST_HEADER include/spdk/histogram_data.h 00:02:58.327 TEST_HEADER include/spdk/idxd.h 00:02:58.327 CC app/spdk_tgt/spdk_tgt.o 00:02:58.327 TEST_HEADER include/spdk/idxd_spec.h 00:02:58.327 TEST_HEADER include/spdk/init.h 00:02:58.327 TEST_HEADER include/spdk/ioat.h 00:02:58.327 CC examples/ioat/perf/perf.o 00:02:58.327 TEST_HEADER include/spdk/ioat_spec.h 00:02:58.327 CC examples/nvme/reconnect/reconnect.o 00:02:58.327 CC examples/vmd/lsvmd/lsvmd.o 00:02:58.327 TEST_HEADER include/spdk/iscsi_spec.h 00:02:58.327 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:58.327 CC examples/idxd/perf/perf.o 
00:02:58.327 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:58.327 CC examples/accel/perf/accel_perf.o 00:02:58.327 CC test/env/vtophys/vtophys.o 00:02:58.327 TEST_HEADER include/spdk/json.h 00:02:58.327 CC app/fio/nvme/fio_plugin.o 00:02:58.327 CC examples/nvme/hello_world/hello_world.o 00:02:58.327 TEST_HEADER include/spdk/jsonrpc.h 00:02:58.327 TEST_HEADER include/spdk/keyring.h 00:02:58.327 TEST_HEADER include/spdk/keyring_module.h 00:02:58.327 TEST_HEADER include/spdk/likely.h 00:02:58.327 CC test/nvme/aer/aer.o 00:02:58.327 CC examples/util/zipf/zipf.o 00:02:58.327 CC test/event/event_perf/event_perf.o 00:02:58.327 CC examples/nvme/hotplug/hotplug.o 00:02:58.327 CC test/thread/poller_perf/poller_perf.o 00:02:58.327 CC examples/sock/hello_world/hello_sock.o 00:02:58.327 CC examples/ioat/verify/verify.o 00:02:58.327 TEST_HEADER include/spdk/log.h 00:02:58.327 CC examples/vmd/led/led.o 00:02:58.327 CC examples/nvme/arbitration/arbitration.o 00:02:58.327 TEST_HEADER include/spdk/lvol.h 00:02:58.327 TEST_HEADER include/spdk/memory.h 00:02:58.327 TEST_HEADER include/spdk/mmio.h 00:02:58.327 TEST_HEADER include/spdk/nbd.h 00:02:58.589 TEST_HEADER include/spdk/notify.h 00:02:58.589 TEST_HEADER include/spdk/nvme.h 00:02:58.589 TEST_HEADER include/spdk/nvme_intel.h 00:02:58.589 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:58.589 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:58.589 CC examples/thread/thread/thread_ex.o 00:02:58.589 TEST_HEADER include/spdk/nvme_spec.h 00:02:58.589 CC test/blobfs/mkfs/mkfs.o 00:02:58.589 CC examples/bdev/hello_world/hello_bdev.o 00:02:58.589 TEST_HEADER include/spdk/nvme_zns.h 00:02:58.589 CC examples/blob/cli/blobcli.o 00:02:58.589 CC test/dma/test_dma/test_dma.o 00:02:58.589 CC test/accel/dif/dif.o 00:02:58.589 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:58.589 CC examples/blob/hello_world/hello_blob.o 00:02:58.589 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:58.589 CC test/app/bdev_svc/bdev_svc.o 00:02:58.589 TEST_HEADER include/spdk/nvmf.h 00:02:58.589 TEST_HEADER include/spdk/nvmf_spec.h 00:02:58.589 CC examples/bdev/bdevperf/bdevperf.o 00:02:58.589 CC test/bdev/bdevio/bdevio.o 00:02:58.589 TEST_HEADER include/spdk/nvmf_transport.h 00:02:58.589 TEST_HEADER include/spdk/opal.h 00:02:58.589 TEST_HEADER include/spdk/opal_spec.h 00:02:58.589 CC app/fio/bdev/fio_plugin.o 00:02:58.589 CC examples/nvmf/nvmf/nvmf.o 00:02:58.589 TEST_HEADER include/spdk/pci_ids.h 00:02:58.589 TEST_HEADER include/spdk/pipe.h 00:02:58.589 TEST_HEADER include/spdk/queue.h 00:02:58.589 TEST_HEADER include/spdk/reduce.h 00:02:58.589 TEST_HEADER include/spdk/rpc.h 00:02:58.589 TEST_HEADER include/spdk/scheduler.h 00:02:58.589 TEST_HEADER include/spdk/scsi.h 00:02:58.589 TEST_HEADER include/spdk/scsi_spec.h 00:02:58.589 TEST_HEADER include/spdk/sock.h 00:02:58.589 TEST_HEADER include/spdk/stdinc.h 00:02:58.589 TEST_HEADER include/spdk/string.h 00:02:58.589 TEST_HEADER include/spdk/thread.h 00:02:58.589 TEST_HEADER include/spdk/trace.h 00:02:58.589 TEST_HEADER include/spdk/trace_parser.h 00:02:58.589 LINK spdk_lspci 00:02:58.589 TEST_HEADER include/spdk/tree.h 00:02:58.589 TEST_HEADER include/spdk/ublk.h 00:02:58.589 TEST_HEADER include/spdk/util.h 00:02:58.589 TEST_HEADER include/spdk/uuid.h 00:02:58.589 CC test/env/mem_callbacks/mem_callbacks.o 00:02:58.589 TEST_HEADER include/spdk/version.h 00:02:58.589 CC test/lvol/esnap/esnap.o 00:02:58.589 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:58.589 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:58.589 TEST_HEADER include/spdk/vhost.h 
00:02:58.589 TEST_HEADER include/spdk/vmd.h 00:02:58.589 TEST_HEADER include/spdk/xor.h 00:02:58.589 TEST_HEADER include/spdk/zipf.h 00:02:58.589 CXX test/cpp_headers/accel.o 00:02:58.589 LINK rpc_client_test 00:02:58.589 LINK spdk_nvme_discover 00:02:58.589 LINK lsvmd 00:02:58.854 LINK interrupt_tgt 00:02:58.854 LINK nvmf_tgt 00:02:58.854 LINK vtophys 00:02:58.854 LINK led 00:02:58.854 LINK event_perf 00:02:58.854 LINK zipf 00:02:58.854 LINK vhost 00:02:58.854 LINK poller_perf 00:02:58.854 LINK spdk_trace_record 00:02:58.854 LINK cmb_copy 00:02:58.854 LINK iscsi_tgt 00:02:58.854 LINK spdk_tgt 00:02:58.854 LINK ioat_perf 00:02:58.854 LINK hello_world 00:02:58.854 LINK verify 00:02:58.854 LINK bdev_svc 00:02:58.854 LINK mkfs 00:02:58.854 LINK hotplug 00:02:58.854 CXX test/cpp_headers/accel_module.o 00:02:58.854 LINK hello_sock 00:02:58.854 LINK mem_callbacks 00:02:59.121 LINK thread 00:02:59.121 LINK hello_bdev 00:02:59.121 LINK hello_blob 00:02:59.121 LINK aer 00:02:59.121 LINK spdk_dd 00:02:59.121 LINK idxd_perf 00:02:59.121 LINK arbitration 00:02:59.121 LINK spdk_trace 00:02:59.121 LINK reconnect 00:02:59.121 LINK nvmf 00:02:59.121 CC test/event/reactor/reactor.o 00:02:59.388 LINK test_dma 00:02:59.388 CC test/event/reactor_perf/reactor_perf.o 00:02:59.388 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:59.388 LINK dif 00:02:59.388 LINK bdevio 00:02:59.388 CC test/nvme/reset/reset.o 00:02:59.388 CXX test/cpp_headers/assert.o 00:02:59.388 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:59.388 CC examples/nvme/abort/abort.o 00:02:59.388 CXX test/cpp_headers/barrier.o 00:02:59.388 LINK accel_perf 00:02:59.388 CC test/env/memory/memory_ut.o 00:02:59.388 LINK nvme_manage 00:02:59.388 CC test/app/histogram_perf/histogram_perf.o 00:02:59.388 CC test/env/pci/pci_ut.o 00:02:59.388 CC test/nvme/sgl/sgl.o 00:02:59.388 CC test/nvme/e2edp/nvme_dp.o 00:02:59.388 CXX test/cpp_headers/base64.o 00:02:59.388 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:59.388 CXX test/cpp_headers/bdev.o 00:02:59.388 CC test/app/jsoncat/jsoncat.o 00:02:59.388 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:59.388 LINK reactor 00:02:59.388 CC test/event/app_repeat/app_repeat.o 00:02:59.388 LINK blobcli 00:02:59.651 CXX test/cpp_headers/bdev_module.o 00:02:59.651 CC test/nvme/overhead/overhead.o 00:02:59.651 CC test/app/stub/stub.o 00:02:59.651 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:59.651 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:59.651 CXX test/cpp_headers/bdev_zone.o 00:02:59.651 CC test/nvme/err_injection/err_injection.o 00:02:59.651 CXX test/cpp_headers/bit_array.o 00:02:59.651 CXX test/cpp_headers/bit_pool.o 00:02:59.651 LINK spdk_bdev 00:02:59.651 LINK spdk_nvme 00:02:59.651 CC test/event/scheduler/scheduler.o 00:02:59.651 LINK reactor_perf 00:02:59.651 CC test/nvme/startup/startup.o 00:02:59.651 LINK env_dpdk_post_init 00:02:59.651 CXX test/cpp_headers/blob_bdev.o 00:02:59.651 LINK histogram_perf 00:02:59.651 CC test/nvme/reserve/reserve.o 00:02:59.651 CC test/nvme/simple_copy/simple_copy.o 00:02:59.651 CXX test/cpp_headers/blobfs_bdev.o 00:02:59.651 LINK jsoncat 00:02:59.914 CXX test/cpp_headers/blobfs.o 00:02:59.914 CXX test/cpp_headers/blob.o 00:02:59.914 CC test/nvme/connect_stress/connect_stress.o 00:02:59.914 CXX test/cpp_headers/conf.o 00:02:59.914 CXX test/cpp_headers/config.o 00:02:59.914 CC test/nvme/boot_partition/boot_partition.o 00:02:59.914 CXX test/cpp_headers/cpuset.o 00:02:59.914 CXX test/cpp_headers/crc16.o 00:02:59.914 CC test/nvme/compliance/nvme_compliance.o 
00:02:59.914 LINK app_repeat 00:02:59.914 CXX test/cpp_headers/crc32.o 00:02:59.914 LINK reset 00:02:59.914 CC test/nvme/fused_ordering/fused_ordering.o 00:02:59.914 CXX test/cpp_headers/crc64.o 00:02:59.914 LINK spdk_nvme_perf 00:02:59.914 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:59.914 CC test/nvme/fdp/fdp.o 00:02:59.914 CC test/nvme/cuse/cuse.o 00:02:59.914 CXX test/cpp_headers/dif.o 00:02:59.914 LINK stub 00:02:59.914 LINK pmr_persistence 00:02:59.914 CXX test/cpp_headers/dma.o 00:02:59.914 CXX test/cpp_headers/endian.o 00:02:59.914 CXX test/cpp_headers/env_dpdk.o 00:02:59.914 LINK err_injection 00:02:59.914 LINK sgl 00:02:59.914 CXX test/cpp_headers/env.o 00:02:59.914 LINK startup 00:02:59.914 CXX test/cpp_headers/event.o 00:02:59.914 CXX test/cpp_headers/fd_group.o 00:02:59.914 LINK nvme_dp 00:02:59.914 CXX test/cpp_headers/fd.o 00:03:00.175 LINK spdk_nvme_identify 00:03:00.175 LINK abort 00:03:00.175 LINK scheduler 00:03:00.175 CXX test/cpp_headers/file.o 00:03:00.175 CXX test/cpp_headers/ftl.o 00:03:00.175 LINK overhead 00:03:00.175 LINK spdk_top 00:03:00.175 LINK nvme_fuzz 00:03:00.175 LINK pci_ut 00:03:00.175 CXX test/cpp_headers/gpt_spec.o 00:03:00.175 LINK bdevperf 00:03:00.175 CXX test/cpp_headers/hexlify.o 00:03:00.175 CXX test/cpp_headers/histogram_data.o 00:03:00.175 LINK reserve 00:03:00.175 CXX test/cpp_headers/idxd.o 00:03:00.175 LINK boot_partition 00:03:00.175 CXX test/cpp_headers/idxd_spec.o 00:03:00.175 LINK connect_stress 00:03:00.175 CXX test/cpp_headers/init.o 00:03:00.175 CXX test/cpp_headers/ioat.o 00:03:00.175 CXX test/cpp_headers/ioat_spec.o 00:03:00.175 CXX test/cpp_headers/iscsi_spec.o 00:03:00.175 LINK simple_copy 00:03:00.175 CXX test/cpp_headers/json.o 00:03:00.175 CXX test/cpp_headers/jsonrpc.o 00:03:00.175 CXX test/cpp_headers/keyring.o 00:03:00.444 LINK memory_ut 00:03:00.444 LINK fused_ordering 00:03:00.444 CXX test/cpp_headers/keyring_module.o 00:03:00.444 LINK vhost_fuzz 00:03:00.444 LINK doorbell_aers 00:03:00.444 CXX test/cpp_headers/likely.o 00:03:00.444 CXX test/cpp_headers/log.o 00:03:00.444 CXX test/cpp_headers/lvol.o 00:03:00.444 CXX test/cpp_headers/memory.o 00:03:00.444 CXX test/cpp_headers/mmio.o 00:03:00.444 CXX test/cpp_headers/nbd.o 00:03:00.444 CXX test/cpp_headers/notify.o 00:03:00.444 CXX test/cpp_headers/nvme.o 00:03:00.444 CXX test/cpp_headers/nvme_intel.o 00:03:00.444 CXX test/cpp_headers/nvme_ocssd.o 00:03:00.444 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:00.444 CXX test/cpp_headers/nvme_spec.o 00:03:00.444 CXX test/cpp_headers/nvme_zns.o 00:03:00.444 CXX test/cpp_headers/nvmf_cmd.o 00:03:00.444 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:00.444 CXX test/cpp_headers/nvmf.o 00:03:00.444 LINK nvme_compliance 00:03:00.444 CXX test/cpp_headers/nvmf_spec.o 00:03:00.444 CXX test/cpp_headers/nvmf_transport.o 00:03:00.444 CXX test/cpp_headers/opal.o 00:03:00.444 CXX test/cpp_headers/opal_spec.o 00:03:00.444 CXX test/cpp_headers/pci_ids.o 00:03:00.444 CXX test/cpp_headers/pipe.o 00:03:00.444 CXX test/cpp_headers/queue.o 00:03:00.705 CXX test/cpp_headers/reduce.o 00:03:00.705 CXX test/cpp_headers/rpc.o 00:03:00.705 LINK fdp 00:03:00.705 CXX test/cpp_headers/scheduler.o 00:03:00.705 CXX test/cpp_headers/scsi.o 00:03:00.705 CXX test/cpp_headers/scsi_spec.o 00:03:00.705 CXX test/cpp_headers/sock.o 00:03:00.705 CXX test/cpp_headers/stdinc.o 00:03:00.705 CXX test/cpp_headers/string.o 00:03:00.705 CXX test/cpp_headers/thread.o 00:03:00.705 CXX test/cpp_headers/trace.o 00:03:00.705 CXX test/cpp_headers/trace_parser.o 00:03:00.705 CXX 
test/cpp_headers/tree.o 00:03:00.705 CXX test/cpp_headers/ublk.o 00:03:00.705 CXX test/cpp_headers/util.o 00:03:00.705 CXX test/cpp_headers/uuid.o 00:03:00.705 CXX test/cpp_headers/version.o 00:03:00.705 CXX test/cpp_headers/vfio_user_pci.o 00:03:00.705 CXX test/cpp_headers/vfio_user_spec.o 00:03:00.705 CXX test/cpp_headers/vhost.o 00:03:00.705 CXX test/cpp_headers/vmd.o 00:03:00.705 CXX test/cpp_headers/xor.o 00:03:00.705 CXX test/cpp_headers/zipf.o 00:03:01.638 LINK cuse 00:03:01.896 LINK iscsi_fuzz 00:03:04.432 LINK esnap 00:03:04.432 00:03:04.432 real 0m40.339s 00:03:04.432 user 7m38.081s 00:03:04.432 sys 1m52.377s 00:03:04.432 01:31:28 make -- common/autotest_common.sh@1123 -- $ xtrace_disable 00:03:04.432 01:31:28 make -- common/autotest_common.sh@10 -- $ set +x 00:03:04.432 ************************************ 00:03:04.432 END TEST make 00:03:04.432 ************************************ 00:03:04.432 01:31:28 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:04.432 01:31:28 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:04.432 01:31:28 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:04.432 01:31:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:04.432 01:31:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:04.432 01:31:28 -- pm/common@44 -- $ pid=3802804 00:03:04.432 01:31:28 -- pm/common@50 -- $ kill -TERM 3802804 00:03:04.432 01:31:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:04.432 01:31:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:04.432 01:31:28 -- pm/common@44 -- $ pid=3802806 00:03:04.432 01:31:28 -- pm/common@50 -- $ kill -TERM 3802806 00:03:04.432 01:31:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:04.432 01:31:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:04.432 01:31:28 -- pm/common@44 -- $ pid=3802808 00:03:04.432 01:31:28 -- pm/common@50 -- $ kill -TERM 3802808 00:03:04.432 01:31:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:04.432 01:31:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:04.432 01:31:28 -- pm/common@44 -- $ pid=3802843 00:03:04.432 01:31:28 -- pm/common@50 -- $ sudo -E kill -TERM 3802843 00:03:04.432 01:31:28 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:04.432 01:31:28 -- nvmf/common.sh@7 -- # uname -s 00:03:04.432 01:31:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:04.432 01:31:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:04.432 01:31:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:04.432 01:31:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:04.432 01:31:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:04.432 01:31:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:04.432 01:31:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:04.432 01:31:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:04.432 01:31:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:04.432 01:31:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:04.432 01:31:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:03:04.432 01:31:28 -- 
nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:03:04.432 01:31:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:04.432 01:31:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:04.432 01:31:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:04.432 01:31:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:04.432 01:31:28 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:04.432 01:31:28 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:04.432 01:31:28 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:04.432 01:31:28 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:04.432 01:31:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:04.432 01:31:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:04.432 01:31:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:04.432 01:31:28 -- paths/export.sh@5 -- # export PATH 00:03:04.432 01:31:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:04.432 01:31:28 -- nvmf/common.sh@47 -- # : 0 00:03:04.432 01:31:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:04.432 01:31:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:04.432 01:31:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:04.432 01:31:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:04.432 01:31:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:04.432 01:31:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:04.432 01:31:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:04.432 01:31:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:04.432 01:31:28 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:04.432 01:31:28 -- spdk/autotest.sh@32 -- # uname -s 00:03:04.432 01:31:28 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:04.432 01:31:28 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:04.432 01:31:28 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:04.432 01:31:28 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:04.432 01:31:28 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:04.432 01:31:28 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:04.432 01:31:28 -- 
spdk/autotest.sh@46 -- # type -P udevadm 00:03:04.432 01:31:28 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:04.432 01:31:28 -- spdk/autotest.sh@48 -- # udevadm_pid=3878139 00:03:04.432 01:31:28 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:04.432 01:31:28 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:04.432 01:31:28 -- pm/common@17 -- # local monitor 00:03:04.432 01:31:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:04.432 01:31:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:04.432 01:31:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:04.432 01:31:28 -- pm/common@21 -- # date +%s 00:03:04.432 01:31:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:04.432 01:31:28 -- pm/common@21 -- # date +%s 00:03:04.432 01:31:28 -- pm/common@25 -- # sleep 1 00:03:04.432 01:31:28 -- pm/common@21 -- # date +%s 00:03:04.432 01:31:28 -- pm/common@21 -- # date +%s 00:03:04.432 01:31:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715729488 00:03:04.432 01:31:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715729488 00:03:04.432 01:31:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715729488 00:03:04.432 01:31:28 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715729488 00:03:04.432 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715729488_collect-vmstat.pm.log 00:03:04.432 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715729488_collect-cpu-load.pm.log 00:03:04.432 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715729488_collect-cpu-temp.pm.log 00:03:04.690 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715729488_collect-bmc-pm.bmc.pm.log 00:03:05.628 01:31:29 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:05.628 01:31:29 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:05.628 01:31:29 -- common/autotest_common.sh@721 -- # xtrace_disable 00:03:05.628 01:31:29 -- common/autotest_common.sh@10 -- # set +x 00:03:05.628 01:31:29 -- spdk/autotest.sh@59 -- # create_test_list 00:03:05.628 01:31:29 -- common/autotest_common.sh@745 -- # xtrace_disable 00:03:05.628 01:31:29 -- common/autotest_common.sh@10 -- # set +x 00:03:05.628 01:31:29 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:05.628 01:31:29 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:05.628 01:31:29 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:05.628 01:31:29 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:05.628 01:31:29 -- spdk/autotest.sh@63 -- 
# cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:05.628 01:31:29 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:05.628 01:31:29 -- common/autotest_common.sh@1452 -- # uname 00:03:05.628 01:31:29 -- common/autotest_common.sh@1452 -- # '[' Linux = FreeBSD ']' 00:03:05.628 01:31:29 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:05.628 01:31:29 -- common/autotest_common.sh@1472 -- # uname 00:03:05.628 01:31:29 -- common/autotest_common.sh@1472 -- # [[ Linux = FreeBSD ]] 00:03:05.628 01:31:29 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:05.628 01:31:29 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:05.628 01:31:29 -- spdk/autotest.sh@72 -- # hash lcov 00:03:05.628 01:31:29 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:05.628 01:31:29 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:05.628 --rc lcov_branch_coverage=1 00:03:05.628 --rc lcov_function_coverage=1 00:03:05.628 --rc genhtml_branch_coverage=1 00:03:05.628 --rc genhtml_function_coverage=1 00:03:05.628 --rc genhtml_legend=1 00:03:05.628 --rc geninfo_all_blocks=1 00:03:05.628 ' 00:03:05.628 01:31:29 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:05.628 --rc lcov_branch_coverage=1 00:03:05.628 --rc lcov_function_coverage=1 00:03:05.628 --rc genhtml_branch_coverage=1 00:03:05.628 --rc genhtml_function_coverage=1 00:03:05.628 --rc genhtml_legend=1 00:03:05.628 --rc geninfo_all_blocks=1 00:03:05.628 ' 00:03:05.628 01:31:29 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:05.628 --rc lcov_branch_coverage=1 00:03:05.628 --rc lcov_function_coverage=1 00:03:05.628 --rc genhtml_branch_coverage=1 00:03:05.628 --rc genhtml_function_coverage=1 00:03:05.628 --rc genhtml_legend=1 00:03:05.628 --rc geninfo_all_blocks=1 00:03:05.628 --no-external' 00:03:05.628 01:31:29 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:05.628 --rc lcov_branch_coverage=1 00:03:05.628 --rc lcov_function_coverage=1 00:03:05.628 --rc genhtml_branch_coverage=1 00:03:05.628 --rc genhtml_function_coverage=1 00:03:05.628 --rc genhtml_legend=1 00:03:05.628 --rc geninfo_all_blocks=1 00:03:05.628 --no-external' 00:03:05.628 01:31:29 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:05.628 lcov: LCOV version 1.14 00:03:05.628 01:31:29 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:17.880 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:17.880 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:19.250 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:19.250 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:19.250 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:19.250 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno
00:03:19.250 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found
00:03:19.250 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno
00:03:37.326 [geninfo emitted the identical "no functions found" warning for every header stub under /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers, accel.gcno through uuid.gcno; the repeated per-file warning lines are condensed here]
00:03:37.587 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:37.587 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:37.587 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:37.587 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:37.844 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:37.844 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:37.844 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:37.844 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:37.844 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:37.844 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:37.844 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:37.844 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:37.844 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:37.844 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:40.374 01:32:04 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:40.374 01:32:04 -- common/autotest_common.sh@721 -- # xtrace_disable 00:03:40.374 01:32:04 -- common/autotest_common.sh@10 -- # set +x 00:03:40.374 01:32:04 -- spdk/autotest.sh@91 -- # rm -f 00:03:40.374 01:32:04 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:41.747 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:03:41.747 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:03:41.747 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:41.747 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:03:41.748 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:41.748 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:03:41.748 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:03:41.748 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:41.748 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:03:41.748 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:41.748 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:41.748 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:41.748 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:41.748 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:41.748 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:03:41.748 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:03:41.748 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:03:41.748 01:32:05 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:41.748 01:32:05 -- common/autotest_common.sh@1666 -- # zoned_devs=() 
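[note: the get_zoned_devs trace that continues below is essentially a sysfs scan for zoned namespaces, so destructive tests can avoid them. A minimal standalone sketch of the same check; the real helper also maps each zoned device to its PCI address, which this sketch skips:]

    declare -A zoned_devs=()
    for nvme in /sys/block/nvme*; do
        # a namespace is zoned when queue/zoned reports anything other than "none"
        [[ -e $nvme/queue/zoned ]] || continue
        [[ $(<"$nvme/queue/zoned") == none ]] && continue
        zoned_devs[${nvme##*/}]=1
    done
    echo "found ${#zoned_devs[@]} zoned device(s): ${!zoned_devs[*]}"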
00:03:41.748 01:32:05 -- common/autotest_common.sh@1666 -- # local -gA zoned_devs 00:03:41.748 01:32:05 -- common/autotest_common.sh@1667 -- # local nvme bdf 00:03:41.748 01:32:05 -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:03:41.748 01:32:05 -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n1 00:03:41.748 01:32:05 -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:03:41.748 01:32:05 -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:41.748 01:32:05 -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:03:41.748 01:32:05 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:41.748 01:32:05 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:41.748 01:32:05 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:41.748 01:32:05 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:41.748 01:32:05 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:41.748 01:32:05 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:42.006 No valid GPT data, bailing 00:03:42.006 01:32:05 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:42.006 01:32:05 -- scripts/common.sh@391 -- # pt= 00:03:42.006 01:32:05 -- scripts/common.sh@392 -- # return 1 00:03:42.006 01:32:05 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:42.006 1+0 records in 00:03:42.006 1+0 records out 00:03:42.006 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00217963 s, 481 MB/s 00:03:42.006 01:32:05 -- spdk/autotest.sh@118 -- # sync 00:03:42.006 01:32:05 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:42.006 01:32:05 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:42.006 01:32:05 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:43.904 01:32:07 -- spdk/autotest.sh@124 -- # uname -s 00:03:43.904 01:32:07 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:43.904 01:32:07 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:43.905 01:32:07 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:43.905 01:32:07 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:43.905 01:32:07 -- common/autotest_common.sh@10 -- # set +x 00:03:43.905 ************************************ 00:03:43.905 START TEST setup.sh 00:03:43.905 ************************************ 00:03:43.905 01:32:07 setup.sh -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:43.905 * Looking for test storage... 
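[note: before the dd above scrubbed /dev/nvme0n1, block_in_use confirmed the disk carries no partition table. A sketch of that gate, assuming the blkid probe alone decides; the real scripts/common.sh first consults spdk-gpt.py for GPT data, as the 'No valid GPT data, bailing' line shows:]

    block_in_use() {
        local block=$1 pt
        # blkid prints the partition-table type (gpt, dos, ...) or nothing at all
        pt=$(blkid -s PTTYPE -o value "$block") || pt=
        [[ -n $pt ]]
    }
    if ! block_in_use /dev/nvme0n1; then
        # wipe the label area so stale metadata cannot leak into the tests
        dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
    fi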
00:03:43.905 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:43.905 01:32:07 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:43.905 01:32:07 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:43.905 01:32:07 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:43.905 01:32:07 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:43.905 01:32:07 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:43.905 01:32:07 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:43.905 ************************************ 00:03:43.905 START TEST acl 00:03:43.905 ************************************ 00:03:43.905 01:32:07 setup.sh.acl -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:43.905 * Looking for test storage... 00:03:43.905 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:43.905 01:32:07 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:43.905 01:32:07 setup.sh.acl -- common/autotest_common.sh@1666 -- # zoned_devs=() 00:03:43.905 01:32:07 setup.sh.acl -- common/autotest_common.sh@1666 -- # local -gA zoned_devs 00:03:43.905 01:32:07 setup.sh.acl -- common/autotest_common.sh@1667 -- # local nvme bdf 00:03:43.905 01:32:07 setup.sh.acl -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:03:43.905 01:32:07 setup.sh.acl -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n1 00:03:43.905 01:32:07 setup.sh.acl -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:03:43.905 01:32:07 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:43.905 01:32:07 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:03:43.905 01:32:07 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:43.905 01:32:07 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:43.905 01:32:07 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:43.905 01:32:07 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:43.905 01:32:07 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:43.905 01:32:07 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:43.905 01:32:07 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:45.804 01:32:09 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:45.804 01:32:09 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:45.804 01:32:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:45.804 01:32:09 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:45.804 01:32:09 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.804 01:32:09 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:46.738 Hugepages 00:03:46.738 node hugesize free / total 00:03:46.738 01:32:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:46.738 01:32:10 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:46.738 01:32:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.738 01:32:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:46.738 01:32:10 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:46.738 01:32:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:03:46.738 01:32:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:46.738 01:32:10 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:46.738 01:32:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.738 00:03:46.738 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:46.738 01:32:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:46.738 01:32:10 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:46.738 01:32:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ [xtrace condensed, 00:03:46.738, setup/acl.sh@19-20: channels 0000:00:04.0 through 0000:00:04.7 each match *:*:*.* but fail [[ ioatdma == nvme ]] and are skipped] 00:03:46.996 01:32:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:0b:00.0 == *:*:*.* ]] 00:03:46.996 01:32:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:46.996 01:32:10 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\b\:\0\0\.\0* ]] 00:03:46.996 01:32:10 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:46.996 01:32:10 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:46.996 01:32:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ [xtrace condensed, 00:03:46.996, setup/acl.sh@19-20: channels 0000:80:04.0 through 0000:80:04.7 are skipped the same way] 00:03:46.996 01:32:10 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:46.996 01:32:10 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:46.996 01:32:10 setup.sh.acl -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:46.996 01:32:10 setup.sh.acl -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:46.996 01:32:10 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:46.996 ************************************ 00:03:46.996 START TEST denied 00:03:46.996 ************************************ 00:03:46.996 01:32:10 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # denied 00:03:46.996 01:32:10 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:0b:00.0' 00:03:46.996 01:32:10 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config
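[note: the devs/drivers collection traced above is a plain parse of the 'setup output status' table. A sketch of the pattern; the filtering mirrors setup/acl.sh lines 18-22, and 'scripts/setup.sh status' stands in for the wrapped invocation:]

    declare -a devs=()
    declare -A drivers=()
    while read -r _ dev _ _ _ driver _; do
        [[ $dev == *:*:*.* ]] || continue                # keep BDF rows, skip the hugepage table
        [[ $driver == nvme ]] || continue                # the acl tests only exercise NVMe controllers
        [[ ${PCI_BLOCKED:-} == *"$dev"* ]] && continue   # honor the block list (empty in this run)
        devs+=("$dev")
        drivers[$dev]=$driver
    done < <(scripts/setup.sh status)
    echo "collected ${#devs[@]} controller(s): ${devs[*]}"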
00:03:46.996 01:32:10 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:0b:00.0' 00:03:46.996 01:32:10 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.996 01:32:10 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:48.431 0000:0b:00.0 (8086 0a54): Skipping denied controller at 0000:0b:00.0 00:03:48.431 01:32:12 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:0b:00.0 00:03:48.431 01:32:12 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:48.431 01:32:12 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:48.431 01:32:12 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:0b:00.0 ]] 00:03:48.431 01:32:12 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:0b:00.0/driver 00:03:48.431 01:32:12 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:48.431 01:32:12 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:48.431 01:32:12 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:48.431 01:32:12 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:48.431 01:32:12 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:50.957 00:03:50.957 real 0m3.935s 00:03:50.957 user 0m1.186s 00:03:50.957 sys 0m1.931s 00:03:50.957 01:32:14 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:50.957 01:32:14 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:50.957 ************************************ 00:03:50.957 END TEST denied 00:03:50.957 ************************************ 00:03:50.957 01:32:14 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:50.957 01:32:14 setup.sh.acl -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:50.957 01:32:14 setup.sh.acl -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:50.957 01:32:14 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:50.957 ************************************ 00:03:50.957 START TEST allowed 00:03:50.957 ************************************ 00:03:50.957 01:32:14 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # allowed 00:03:50.957 01:32:14 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:0b:00.0 00:03:50.957 01:32:14 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:50.957 01:32:14 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:0b:00.0 .*: nvme -> .*' 00:03:50.957 01:32:14 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.957 01:32:14 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:53.485 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:03:53.485 01:32:17 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:53.485 01:32:17 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:53.485 01:32:17 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:53.485 01:32:17 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:53.485 01:32:17 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:54.857 00:03:54.857 real 0m3.994s 00:03:54.857 user 0m1.122s 00:03:54.857 sys 0m1.838s 00:03:54.857 01:32:18 
setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:54.857 01:32:18 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:54.857 ************************************ 00:03:54.857 END TEST allowed 00:03:54.857 ************************************ 00:03:54.857 00:03:54.857 real 0m11.171s 00:03:54.857 user 0m3.557s 00:03:54.857 sys 0m5.853s 00:03:54.857 01:32:18 setup.sh.acl -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:54.858 01:32:18 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:54.858 ************************************ 00:03:54.858 END TEST acl 00:03:54.858 ************************************ 00:03:54.858 01:32:18 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:54.858 01:32:18 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:54.858 01:32:18 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:54.858 01:32:18 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:55.117 ************************************ 00:03:55.117 START TEST hugepages 00:03:55.117 ************************************ 00:03:55.117 01:32:18 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:55.117 * Looking for test storage... 00:03:55.117 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:55.117 01:32:18 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:55.117 01:32:18 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:55.117 01:32:18 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:55.117 01:32:18 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:55.117 01:32:18 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:55.117 01:32:18 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:55.117 01:32:18 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:55.117 01:32:18 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:55.117 01:32:18 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:55.117 01:32:18 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:55.117 01:32:18 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.117 01:32:18 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.117 01:32:18 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.117 01:32:18 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.117 01:32:18 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.117 01:32:18 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:55.117 01:32:18 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:55.117 01:32:18 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37406100 kB' 'MemAvailable: 41129600 kB' 'Buffers: 2696 kB' 'Cached: 16643408 kB' 'SwapCached: 0 kB' 'Active: 13563512 kB' 'Inactive: 3500732 kB' 'Active(anon): 12951944 kB' 'Inactive(anon): 0 kB' 'Active(file): 611568 kB' 'Inactive(file): 3500732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 
kB' 'AnonPages: 421396 kB' 'Mapped: 183852 kB' 'Shmem: 12533804 kB' 'KReclaimable: 210076 kB' 'Slab: 576072 kB' 'SReclaimable: 210076 kB' 'SUnreclaim: 365996 kB' 'KernelStack: 12912 kB' 'PageTables: 8504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562316 kB' 'Committed_AS: 14090496 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198316 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1656412 kB' 'DirectMap2M: 19234816 kB' 'DirectMap1G: 48234496 kB' [xtrace condensed, 01:32:18 setup.sh.hugepages, setup/common.sh@31-32: the IFS=': ' / read -r var val _ loop checks every /proc/meminfo field from MemTotal through HugePages_Surp against Hugepagesize and continues past each non-match] 00:03:55.119 01:32:18 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.119 01:32:18 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:55.119 01:32:18 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:55.119 01:32:18 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:55.119 01:32:18 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:55.119 01:32:18 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:55.119 01:32:18 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:55.119 01:32:18 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:55.119 01:32:18 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:55.119 01:32:18 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:55.119 01:32:18 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:55.119 01:32:18 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:55.119 01:32:18 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.119 01:32:18 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:55.119 01:32:18 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.119 01:32:18 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:55.119 01:32:18 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:55.119 01:32:18 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:55.119 01:32:18 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:55.119 01:32:18 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:55.119 01:32:18 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
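[note: the field-matching loop condensed above is the core of get_meminfo: walk /proc/meminfo until the requested field appears, then print its value. A sketch stripped of the per-NUMA-node handling the full helper also supports:]

    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # e.g. "Hugepagesize: 2048 kB" splits into var=Hugepagesize val=2048 _=kB
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }
    default_hugepages=$(get_meminfo Hugepagesize)   # 2048 on this node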
00:03:55.119 01:32:18 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:55.119 01:32:18 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 [xtrace condensed, setup/hugepages.sh@39-41: clear_hp repeats the echo 0 for the second hugepage size and again for both sizes on the other node] 00:03:55.119 01:32:18 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:55.119 01:32:18 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:55.119 01:32:18 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:55.119 01:32:18 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:55.119 01:32:18 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:55.119 01:32:18 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:55.119 ************************************ 00:03:55.119 START TEST default_setup 00:03:55.119 ************************************ 00:03:55.119 01:32:18 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # default_setup 00:03:55.119 01:32:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:55.119 01:32:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:55.119 01:32:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:55.119 01:32:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:55.119 01:32:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:55.119 01:32:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:55.119 01:32:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:55.119 01:32:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:55.119 01:32:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:55.119 01:32:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:55.119 01:32:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:55.119 01:32:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:55.119 01:32:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:55.119 01:32:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:55.119 01:32:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:55.119 01:32:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:55.119 01:32:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:55.119 01:32:18
setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:55.119 01:32:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:55.119 01:32:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:55.119 01:32:18 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.119 01:32:18 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:56.491 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:56.491 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:56.491 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:56.491 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:56.491 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:56.491 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:56.491 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:56.491 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:56.491 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:56.491 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:56.491 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:56.491 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:56.491 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:56.491 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:56.491 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:56.491 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:57.428 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:03:57.428 01:32:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:57.428 01:32:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:57.428 01:32:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:57.428 01:32:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:57.428 01:32:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:57.428 01:32:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:57.428 01:32:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:57.428 01:32:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:57.428 01:32:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:57.428 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:57.428 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:57.428 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:57.428 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:57.428 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.428 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.428 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.428 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.428 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.428 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.428 01:32:21 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:57.428 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 39509148 kB' 'MemAvailable: 43232800 kB' 'Buffers: 2696 kB' 'Cached: 16643500 kB' 'SwapCached: 0 kB' 'Active: 13582672 kB' 'Inactive: 3500732 kB' 'Active(anon): 12971104 kB' 'Inactive(anon): 0 kB' 'Active(file): 611568 kB' 'Inactive(file): 3500732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 440452 kB' 'Mapped: 184396 kB' 'Shmem: 12533896 kB' 'KReclaimable: 210380 kB' 'Slab: 575588 kB' 'SReclaimable: 210380 kB' 'SUnreclaim: 365208 kB' 'KernelStack: 12800 kB' 'PageTables: 8120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14113964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198460 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1656412 kB' 'DirectMap2M: 19234816 kB' 'DirectMap1G: 48234496 kB' [xtrace condensed, 00:03:57.428-00:03:57.693, 01:32:21 setup.sh.hugepages.default_setup, setup/common.sh@31-32: the IFS=': ' / read -r var val _ loop checks each /proc/meminfo field in order against AnonHugePages, continuing past every non-match (MemTotal through NFS_Unstable shown; the loop continues)] 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 39511728 kB' 'MemAvailable: 43235380 kB' 'Buffers: 2696 kB' 'Cached: 16643500 kB' 'SwapCached: 0 kB' 'Active: 13583248 kB' 'Inactive: 3500732 kB' 'Active(anon): 12971680 kB' 'Inactive(anon): 0 kB' 'Active(file): 611568 kB' 'Inactive(file): 3500732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 441056 kB' 'Mapped: 184392 kB' 'Shmem: 12533896 kB' 'KReclaimable: 210380 kB' 'Slab: 575524 kB' 'SReclaimable: 210380 kB' 'SUnreclaim: 365144 kB' 'KernelStack: 12864 kB' 'PageTables: 8256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14113980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198444 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1656412 kB' 'DirectMap2M: 19234816 kB' 'DirectMap1G: 48234496 kB'
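The scan traced above is get_meminfo() from setup/common.sh resolving one /proc/meminfo key at a time. A minimal sketch of that helper, reconstructed only from the xtrace (the body below is inferred, not copied from the SPDK source):

    #!/usr/bin/env bash
    # Sketch only: reconstructed from the xtrace above; the real
    # setup/common.sh helper may differ in details.
    shopt -s extglob # required by the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val
        local mem_f=/proc/meminfo mem
        # With a node argument the per-node meminfo file is used instead;
        # the trace shows this check failing for the empty node=.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node lines carry a "Node N " prefix; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            # Each non-matching key produces one of the '[[ ... ]]' /
            # 'continue' pairs that dominate this trace.
            [[ $var == "$get" ]] || continue
            echo "$val" # e.g. 0 for AnonHugePages, 1024 for HugePages_Total
            return 0
        done
        return 1
    }

Because the loop only stops on a match, a single lookup such as get_meminfo AnonHugePages emits one 'continue' per preceding meminfo field, which is why each call produces such a long run of trace entries.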
00:03:57.693 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [trace condensed: every meminfo key from MemTotal through HugePages_Rsvd fails the '[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]' test and hits 'continue']
00:03:57.695 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.695 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:57.695 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:57.695 01:32:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:03:57.695 01:32:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:57.695 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@17-31 -- # [trace condensed: get_meminfo prologue as above (locals, mem_f=/proc/meminfo, mapfile -t mem, IFS=': ')]
00:03:57.695 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 39510584 kB' 'MemAvailable: 43234236 kB' 'Buffers: 2696 kB' 'Cached: 16643520 kB' 'SwapCached: 0 kB' 'Active: 13582408 kB' 'Inactive: 3500732 kB' 'Active(anon): 12970840 kB' 'Inactive(anon): 0 kB' 'Active(file): 611568 kB' 'Inactive(file): 3500732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 440224 kB' 'Mapped: 184364 kB' 'Shmem: 12533916 kB' 'KReclaimable: 210380 kB' 'Slab: 575508 kB' 'SReclaimable: 210380 kB' 'SUnreclaim: 365128 kB' 'KernelStack: 12864 kB' 'PageTables: 8224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14114004 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198428 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1656412 kB' 'DirectMap2M: 19234816 kB' 'DirectMap1G: 48234496 kB'
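Each printf above dumps the full /proc/meminfo snapshot the helper parsed. The hugepage fields of every snapshot are mutually consistent: HugePages_Total * Hugepagesize = 1024 * 2048 kB = 2097152 kB, exactly the 'Hugetlb' figure. A stand-alone check of that identity (illustrative only, not part of the test suite):

    # Verify HugePages_Total * Hugepagesize == Hugetlb on the running host.
    awk '/^HugePages_Total:/ { n = $2 }
         /^Hugepagesize:/    { sz = $2 }
         /^Hugetlb:/         { tot = $2 }
         END { if (n * sz == tot) print "consistent"; else print "mismatch" }' /proc/meminfo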
00:03:57.696 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [trace condensed: every meminfo key from MemTotal through HugePages_Free fails the '[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]' test and hits 'continue']
00:03:57.697 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.697 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:57.697 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:57.697 01:32:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:03:57.697 01:32:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:03:57.697 01:32:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:57.697 01:32:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:03:57.697 01:32:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:57.697 01:32:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:57.697 01:32:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
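With anon=0, surp=0 and resv=0 collected, the two arithmetic tests above are the pass/fail core of default_setup: both reduce to 1024 == 1024, confirming the configured hugepage count. A hedged sketch of that verification step (the wrapper name and locals are assumptions read off the trace; it reuses the get_meminfo sketch above):

    # Sketch of the logic traced at setup/hugepages.sh@97-109; the framing
    # function and the exact source of the literal 1024 are assumptions.
    verify_default_setup() {
        local nr_hugepages=1024 # the count this test configured earlier
        local anon surp resv total
        anon=$(get_meminfo AnonHugePages)    # 0 in the trace
        surp=$(get_meminfo HugePages_Surp)   # 0
        resv=$(get_meminfo HugePages_Rsvd)   # 0
        total=$(get_meminfo HugePages_Total) # 1024
        echo "nr_hugepages=$nr_hugepages"
        echo "resv_hugepages=$resv"
        echo "surplus_hugepages=$surp"
        echo "anon_hugepages=$anon"
        # The allocation counts as successful only if the kernel's total
        # matches the requested count with no surplus/reserved pages left.
        (( total == nr_hugepages + surp + resv )) || return 1
        (( total == nr_hugepages ))
    }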
00:03:57.697 01:32:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:57.697 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:57.697 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:57.697 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:57.697 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:57.697 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:57.697 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:57.697 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:57.697 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:57.697 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:57.697 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:57.697 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:57.697 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 39510584 kB' 'MemAvailable: 43234236 kB' 'Buffers: 2696 kB' 'Cached: 16643536 kB' 'SwapCached: 0 kB' 'Active: 13582168 kB' 'Inactive: 3500732 kB' 'Active(anon): 12970600 kB' 'Inactive(anon): 0 kB' 'Active(file): 611568 kB' 'Inactive(file): 3500732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 439484 kB' 'Mapped: 184364 kB' 'Shmem: 12533932 kB' 'KReclaimable: 210380 kB' 'Slab: 575508 kB' 'SReclaimable: 210380 kB' 'SUnreclaim: 365128 kB' 'KernelStack: 12832 kB' 'PageTables: 8116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14114024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198428 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1656412 kB' 'DirectMap2M: 19234816 kB' 'DirectMap1G: 48234496 kB'
00:03:57.697 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
[... identical non-matching checks for the remaining /proc/meminfo fields (MemFree through Unaccepted) elided ...]
00:03:57.699 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:57.699 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:03:57.699 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:57.699 01:32:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:57.699 01:32:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:57.699 01:32:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:57.699 01:32:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:57.699 01:32:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:57.699 01:32:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:57.699 01:32:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:57.699 01:32:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:57.699 01:32:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:57.699 01:32:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:57.699 01:32:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
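The per-node loop entered here folds reserved and surplus pages into the expected count for each node before comparing against the kernel's view. A minimal sketch of that bookkeeping, reusing the get_meminfo sketch above (array names follow the trace; the exact control flow in hugepages.sh may differ):

    declare -a nodes_test=([0]=1024)   # expected pages per node, from the test setup
    resv=0                             # HugePages_Rsvd read a few lines earlier

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                 # hugepages.sh@116 in the trace
        surp=$(get_meminfo HugePages_Surp "$node")     # per-node surplus (0 here)
        (( nodes_test[node] += surp ))                 # hugepages.sh@117 in the trace
    done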
00:03:57.699 01:32:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:57.699 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:57.699 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:03:57.699 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:57.699 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:57.699 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:57.699 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:57.699 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:57.699 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:57.699 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:57.699 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:57.699 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:57.699 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22633180 kB' 'MemUsed: 10243760 kB' 'SwapCached: 0 kB' 'Active: 6930696 kB' 'Inactive: 154584 kB' 'Active(anon): 6599536 kB' 'Inactive(anon): 0 kB' 'Active(file): 331160 kB' 'Inactive(file): 154584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6785300 kB' 'Mapped: 64244 kB' 'AnonPages: 303132 kB' 'Shmem: 6299556 kB' 'KernelStack: 7880 kB' 'PageTables: 5020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98504 kB' 'Slab: 267244 kB' 'SReclaimable: 98504 kB' 'SUnreclaim: 168740 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:57.699 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[... identical non-matching checks for the remaining node0 fields (MemFree through HugePages_Free) elided ...]
00:03:57.700 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:57.700 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:57.700 01:32:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:57.700 01:32:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:57.700 01:32:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:57.700 01:32:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:57.700 01:32:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:57.700 01:32:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:57.700 node0=1024 expecting 1024
00:03:57.700 01:32:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:57.700
00:03:57.700 real	0m2.521s
00:03:57.700 user	0m0.639s
00:03:57.700 sys	0m0.876s
00:03:57.700 01:32:21 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # xtrace_disable
00:03:57.700 01:32:21 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:03:57.700 ************************************
00:03:57.700 END TEST default_setup
00:03:57.700 ************************************
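The sorted_t[nodes_test[node]]=1 and sorted_s[nodes_sys[node]]=1 assignments just before the final check use a small bash idiom worth noting: indexing an array by a value turns it into a set, so the distinct per-node counts can be collected without an explicit sort. A standalone illustration (the array contents come from this trace; the variable names are mine):

    declare -A seen=()
    declare -a nodes=([0]=1024 [1]=0)   # per-node hugepage counts as reported above
    for n in "${!nodes[@]}"; do
        seen[${nodes[n]}]=1             # key is the count itself, so duplicates collapse
    done
    echo "distinct counts: ${!seen[*]}" # prints the two distinct values, 0 and 1024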
00:03:57.700 01:32:21 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:57.700 01:32:21 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']'
00:03:57.700 01:32:21 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable
00:03:57.700 01:32:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:57.700 ************************************
00:03:57.700 START TEST per_node_1G_alloc
00:03:57.700 ************************************
00:03:57.700 01:32:21 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # per_node_1G_alloc
00:03:57.701 01:32:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:03:57.701 01:32:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:57.701 01:32:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:57.701 01:32:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:57.701 01:32:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:03:57.701 01:32:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:57.701 01:32:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:57.701 01:32:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:57.701 01:32:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:57.701 01:32:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:57.701 01:32:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:57.701 01:32:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:57.701 01:32:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:57.701 01:32:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:57.701 01:32:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:57.701 01:32:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:57.701 01:32:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:57.701 01:32:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:57.701 01:32:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:57.701 01:32:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:57.701 01:32:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:57.701 01:32:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:57.701 01:32:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:57.701 01:32:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:57.701 01:32:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:03:57.701 01:32:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:57.701 01:32:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:59.075 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:59.075 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:59.075 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:59.075 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:59.075 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:59.075 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:59.075 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:59.075 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:59.075 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:59.075 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:59.075 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:59.075 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:59.075 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:59.075 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:59.075 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:59.075 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:59.075 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:59.340 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:59.340 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:59.340 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:59.340 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:59.341 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:59.341 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:59.341 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:59.341 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:59.341 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:59.341 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:59.341 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:59.341 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:59.341 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:59.341 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:59.341 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:59.341 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:59.341 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:59.341 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:59.341 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:59.341 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:59.341 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:59.341 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 39513288 kB' 'MemAvailable: 43236932 kB' 'Buffers: 2696 kB' 'Cached: 16643608 kB' 'SwapCached: 0 kB' 'Active: 13583332 kB' 'Inactive: 3500732 kB' 'Active(anon): 12971764 kB' 'Inactive(anon): 0 kB' 'Active(file): 611568 kB' 'Inactive(file): 3500732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 440984 kB' 'Mapped: 183944 kB' 'Shmem: 12534004 kB' 'KReclaimable: 210364 kB' 'Slab: 575788 kB' 'SReclaimable: 210364 kB' 'SUnreclaim: 365424 kB' 'KernelStack: 12880 kB' 'PageTables: 8252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14114212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198412 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1656412 kB' 'DirectMap2M: 19234816 kB' 'DirectMap1G: 48234496 kB'
00:03:59.341 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
[... identical non-matching checks for the intervening /proc/meminfo fields (MemFree through Percpu) elided ...]
00:03:59.341 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:59.341 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:59.342 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.342 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.342 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.342 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.342 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:59.342 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:59.342 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:59.342 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.342 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:59.342 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:59.342 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.342 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.342 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.342 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.342 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.342 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.342 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.342 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.342 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 39516952 kB' 'MemAvailable: 43240596 kB' 'Buffers: 2696 kB' 'Cached: 16643608 kB' 'SwapCached: 0 kB' 'Active: 13583396 kB' 'Inactive: 3500732 kB' 'Active(anon): 12971828 kB' 'Inactive(anon): 0 kB' 'Active(file): 611568 kB' 'Inactive(file): 3500732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 441048 kB' 'Mapped: 183952 kB' 'Shmem: 12534004 kB' 'KReclaimable: 210364 kB' 'Slab: 575772 kB' 'SReclaimable: 210364 kB' 'SUnreclaim: 365408 kB' 'KernelStack: 12864 kB' 'PageTables: 8128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14114228 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198396 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1656412 kB' 'DirectMap2M: 19234816 kB' 'DirectMap1G: 48234496 kB' 00:03:59.342 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.342 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.342 01:32:23 
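What the condensed trace above is doing, as a standalone sketch: get_meminfo walks a meminfo file with IFS=': ' and echoes the value of the first key equal to $get, which is why every non-matching key shows up as one read/compare/continue cycle. The sketch below is reconstructed only from the common.sh statements visible in this log (the function name, paths, and array handling come from the trace); the option handling and error paths are assumptions.

#!/usr/bin/env bash
shopt -s extglob   # the +([0-9]) pattern below needs extended globs

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local var val
    local mem_f mem
    mem_f=/proc/meminfo
    # common.sh@23: prefer the per-node file when a node was requested and
    # it exists (with node unset, the trace above probes the non-existent
    # path /sys/devices/system/node/node/meminfo and falls through).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem <"$mem_f"
    # common.sh@29: per-node lines carry a "Node N " prefix; strip it.
    mem=("${mem[@]#Node +([0-9]) }")
    local line
    for line in "${mem[@]}"; do
        # common.sh@31: split "Key: value kB" into key and value.
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] || continue   # the long continue runs above
        echo "$val"                        # common.sh@33
        return 0
    done
    return 1
}

get_meminfo_sketch HugePages_Free   # prints 1024 given the snapshot above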
[xtrace condensed: the same read/compare/continue cycle repeats for every /proc/meminfo key that is not HugePages_Surp]
00:03:59.344 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:59.344 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:59.344 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:59.344 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:59.344 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:59.344 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:59.344 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:59.344 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:59.344 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:59.344 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:59.344 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:59.344 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:59.344 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:59.344 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:59.344 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:59.344 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:59.344 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 39517316 kB' 'MemAvailable: 43240960 kB' 'Buffers: 2696 kB' 'Cached: 16643632 kB' 'SwapCached: 0 kB' 'Active: 13582972 kB' 'Inactive: 3500732 kB' 'Active(anon): 12971404 kB' 'Inactive(anon): 0 kB' 'Active(file): 611568 kB' 'Inactive(file): 3500732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 440620 kB' 'Mapped: 183916 kB' 'Shmem: 12534028 kB' 'KReclaimable: 210364 kB' 'Slab: 575836 kB' 'SReclaimable: 210364 kB' 'SUnreclaim: 365472 kB' 'KernelStack: 12880 kB' 'PageTables: 8204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14114252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198396 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1656412 kB' 'DirectMap2M: 19234816 kB' 'DirectMap1G: 48234496 kB'
00:03:59.344 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:59.344 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
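For orientation (not shown in this log): the totals polled above come from the global /proc/meminfo, which aggregates over NUMA nodes; the per-node pools that the test name per_node_1G_alloc refers to are exposed through the kernel's standard sysfs tree. A hypothetical way to inspect them, following the layout documented in Documentation/admin-guide/mm/hugetlbpage.rst:

# Standard kernel sysfs layout, not taken from this log.
for node in /sys/devices/system/node/node[0-9]*; do
    for sz in "$node"/hugepages/hugepages-*kB; do
        printf '%s %s: nr=%s free=%s surplus=%s\n' \
            "${node##*/}" "${sz##*/}" \
            "$(<"$sz/nr_hugepages")" \
            "$(<"$sz/free_hugepages")" \
            "$(<"$sz/surplus_hugepages")"
    done
done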
[xtrace condensed: the read/compare/continue cycle repeats for every /proc/meminfo key that is not HugePages_Rsvd]
00:03:59.346 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:59.346 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:59.346 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:59.346 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:59.346 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:59.346 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:59.346 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:59.346 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:59.346 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:59.346 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:59.346 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:59.346 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:59.346 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:59.346 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:59.346 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:59.346 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:59.346 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:59.346 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:59.346 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:59.346 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:59.346 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:59.346 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:59.346 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:59.346 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:59.346 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 39516560 kB' 'MemAvailable: 43240204 kB' 'Buffers: 2696 kB' 'Cached: 16643656 kB' 'SwapCached: 0 kB' 'Active: 13583000 kB' 'Inactive: 3500732 kB' 'Active(anon): 12971432 kB' 'Inactive(anon): 0 kB' 'Active(file): 611568 kB' 'Inactive(file): 3500732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 440632 kB' 'Mapped: 183916 kB' 'Shmem: 12534052 kB' 'KReclaimable: 210364 kB' 'Slab: 575836 kB' 'SReclaimable: 210364 kB' 'SUnreclaim: 365472 kB' 'KernelStack: 12880 kB' 'PageTables: 8204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14114276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198396 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1656412 kB' 'DirectMap2M: 19234816 kB' 'DirectMap1G: 48234496 kB'
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.347 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.348 01:32:23 
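The block above is a single call to the get_meminfo helper in setup/common.sh: the chosen meminfo file is slurped with mapfile, and each "key: value" line is split by read with IFS=': ' and skipped until the requested key matches, whose value is then echoed. A minimal sketch of that pattern, reconstructed from the trace (the function framing, the fallback return and the comments are assumptions, not the verbatim helper):

    shopt -s extglob                                   # needed by the +([0-9]) pattern below
    get_meminfo() {                                    # usage: get_meminfo <key> [node]
            local get=$1 node=${2:-} var val _
            local mem_f=/proc/meminfo                  # system-wide source by default
            if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
                    mem_f=/sys/devices/system/node/node$node/meminfo
            fi
            local -a mem
            mapfile -t mem < "$mem_f"
            mem=("${mem[@]#Node +([0-9]) }")           # strip the "Node N " prefix of per-node files
            local line
            for line in "${mem[@]}"; do
                    IFS=': ' read -r var val _ <<< "$line"
                    [[ $var == "$get" ]] || continue   # not the requested key, try the next line
                    echo "$val"                        # e.g. 1024 for HugePages_Total above
                    return 0
            done
            return 1                                   # assumed: key not present in the file
    }

The values echoed in the trace then correspond to calls such as resv=$(get_meminfo HugePages_Rsvd) and total=$(get_meminfo HugePages_Total).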
00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:59.348 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 23685804 kB' 'MemUsed: 9191136 kB' 'SwapCached: 0 kB' 'Active: 6930888 kB' 'Inactive: 154584 kB' 'Active(anon): 6599728 kB' 'Inactive(anon): 0 kB' 'Active(file): 331160 kB' 'Inactive(file): 154584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6785300 kB' 'Mapped: 64284 kB' 'AnonPages: 303336 kB' 'Shmem: 6299556 kB' 'KernelStack: 7896 kB' 'PageTables: 5052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98504 kB' 'Slab: 267368 kB' 'SReclaimable: 98504 kB' 'SUnreclaim: 168864 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... setup/common.sh@31-32 xtrace scans the node-0 snapshot key by key, from MemTotal through HugePages_Free, until HugePages_Surp matches ...]
00:03:59.350 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:59.350 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
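For the per-node lookups the helper switches mem_f to the node's own meminfo under sysfs, where every line additionally carries a "Node N " prefix; the mem=("${mem[@]#Node +([0-9]) }") expansion in the trace strips that prefix so the same parser works for both sources. An illustration with the node-0 values from the snapshot above (interactive session assumed):

    $ shopt -s extglob
    $ mapfile -t mem < /sys/devices/system/node/node0/meminfo
    $ echo "${mem[0]}"
    Node 0 MemTotal:       32876940 kB
    $ mem=("${mem[@]#Node +([0-9]) }")
    $ echo "${mem[0]}"
    MemTotal:       32876940 kB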
00:03:59.350 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:59.350 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:59.350 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:59.350 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:59.350 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:59.350 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:03:59.350 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:59.350 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:59.350 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:59.350 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:59.350 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:59.350 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:59.350 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:59.350 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:59.350 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:59.350 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 15830756 kB' 'MemUsed: 11834032 kB' 'SwapCached: 0 kB' 'Active: 6652432 kB' 'Inactive: 3346148 kB' 'Active(anon): 6372024 kB' 'Inactive(anon): 0 kB' 'Active(file): 280408 kB' 'Inactive(file): 3346148 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9861092 kB' 'Mapped: 119632 kB' 'AnonPages: 137572 kB' 'Shmem: 6234536 kB' 'KernelStack: 4968 kB' 'PageTables: 3104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 111860 kB' 'Slab: 308468 kB' 'SReclaimable: 111860 kB' 'SUnreclaim: 196608 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... setup/common.sh@31-32 xtrace scans the node-1 snapshot key by key, from MemTotal through HugePages_Free, until HugePages_Surp matches ...]
00:03:59.351 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:59.351 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:59.351 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:59.351 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:59.351 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:59.351 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:59.351 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:59.351 node0=512 expecting 512
00:03:59.351 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:59.351 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:59.351 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:59.351 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:59.351 node1=512 expecting 512
00:03:59.351 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
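Each nodes_test[] entry (the count read back from sysfs plus any reserved pages) is then checked against the expected nodes_sys[] split. Indexing plain arrays by the count itself de-duplicates the values, so equal index lists mean every node holds the expected number of pages; a sketch of that comparison, with the exact expansion behind the traced "[[ 512 == \5\1\2 ]]" being an assumption:

    for node in "${!nodes_test[@]}"; do
            sorted_t[nodes_test[node]]=1   # distinct observed counts, as in the trace
            sorted_s[nodes_sys[node]]=1    # distinct expected counts
            echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
    done
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]]   # both index lists expand to "512" here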
00:03:59.351 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:59.351
00:03:59.351 real 0m1.641s
00:03:59.351 user 0m0.634s
00:03:59.351 sys 0m0.970s
00:03:59.351 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable
00:03:59.351 01:32:23 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:59.351 ************************************
00:03:59.351 END TEST per_node_1G_alloc
00:03:59.351 ************************************
00:03:59.351 01:32:23 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:59.351 01:32:23 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']'
00:03:59.351 01:32:23 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable
00:03:59.351 01:32:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:59.351 ************************************
00:03:59.351 START TEST even_2G_alloc
00:03:59.351 ************************************
00:03:59.351 01:32:23 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # even_2G_alloc
00:03:59.351 01:32:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:59.351 01:32:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:59.351 01:32:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:59.351 01:32:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:59.351 01:32:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:59.351 01:32:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:59.351 01:32:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:59.351 01:32:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:59.351 01:32:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:59.351 01:32:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:59.351 01:32:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:59.351 01:32:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:59.351 01:32:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:59.351 01:32:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:59.351 01:32:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:59.351 01:32:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:59.351 01:32:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:03:59.351 01:32:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:59.351 01:32:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:59.351 01:32:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:59.351 01:32:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:59.351 01:32:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:59.351 01:32:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
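The setup/hugepages.sh@81-@84 loop traced just above is where the even split happens: the 2097152 kB request at the 2048 kB default page size works out to 1024 pages, handed out as 512 per node from the highest node index down. A standalone sketch of that arithmetic follows; the variable names are illustrative rather than SPDK's own.

#!/usr/bin/env bash
# Illustrative sketch of the per-node split traced above; not SPDK code.
size_kb=2097152            # requested pool size in kB (get_test_nr_hugepages argument)
default_hugepage_kb=2048   # Hugepagesize reported by /proc/meminfo
no_nodes=2                 # NUMA nodes to spread the pool across

nr_hugepages=$(( size_kb / default_hugepage_kb ))   # 1024
per_node=$(( nr_hugepages / no_nodes ))             # 512

declare -a nodes_test
while (( no_nodes > 0 )); do
  nodes_test[no_nodes - 1]=$per_node   # fill from the highest node down, as traced
  (( no_nodes-- ))
done

for node in "${!nodes_test[@]}"; do
  echo "node${node}=${nodes_test[node]} expecting ${per_node}"
done

Running the sketch prints "node0=512 expecting 512" and "node1=512 expecting 512", matching the output recorded in the log above.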
00:03:59.351 01:32:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:59.351 01:32:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:59.351 01:32:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:03:59.351 01:32:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:59.351 01:32:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:00.725 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:00.725 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:00.725 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:00.725 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:00.725 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:00.725 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:00.725 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:00.725 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:00.725 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:00.725 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:00.725 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:00.725 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:00.725 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:00.725 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:00.725 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:00.725 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:00.725 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:00.991 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:00.991 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:00.991 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:00.991 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:00.991 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:00.991 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:00.991 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:00.991 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:00.991 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:00.991 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:00.991 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:00.991 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:00.991 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:00.991 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:00.991 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:00.991 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:00.991 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
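From here on the excerpt is one helper traced over and over: get_meminfo snapshots the meminfo file into an array, strips any leading "Node <N>" prefix, then walks the fields with IFS=': ' until the requested key matches and echoes its value (0 if it never matches). The following is a minimal self-contained rendering reconstructed from the xtrace, so treat the exact names and defaults as assumptions rather than SPDK's verbatim code.

#!/usr/bin/env bash
# Approximation of the get_meminfo pattern traced below; reconstructed, not quoted.
shopt -s extglob   # needed for the +([0-9]) pattern that strips node prefixes

get_meminfo_sketch() {
  local get=$1 node=${2:-}
  local mem_f=/proc/meminfo
  # When a node is given, read that node's counters from sysfs instead.
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  local line var val _
  while IFS= read -r line; do
    line=${line#Node +([0-9]) }       # per-node files prefix each line with "Node <N> "
    IFS=': ' read -r var val _ <<< "$line"
    if [[ $var == "$get" ]]; then     # e.g. AnonHugePages, HugePages_Surp, ...
      echo "${val:-0}"
      return 0
    fi
  done < "$mem_f"
  echo 0
}

get_meminfo_sketch AnonHugePages     # prints 0 on the box traced here
get_meminfo_sketch HugePages_Total   # prints 1024 after the allocation above

The xtrace that follows is exactly this field-by-field walk, with each skipped field showing up as a continue at setup/common.sh@32.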
00:04:00.991 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:00.991 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:00.991 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:00.991 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 39522260 kB' 'MemAvailable: 43245880 kB' 'Buffers: 2696 kB' 'Cached: 16643752 kB' 'SwapCached: 0 kB' 'Active: 13577432 kB' 'Inactive: 3500732 kB' 'Active(anon): 12965864 kB' 'Inactive(anon): 0 kB' 'Active(file): 611568 kB' 'Inactive(file): 3500732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 434908 kB' 'Mapped: 182988 kB' 'Shmem: 12534148 kB' 'KReclaimable: 210316 kB' 'Slab: 575768 kB' 'SReclaimable: 210316 kB' 'SUnreclaim: 365452 kB' 'KernelStack: 12944 kB' 'PageTables: 8164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14087568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198460 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1656412 kB' 'DirectMap2M: 19234816 kB' 'DirectMap1G: 48234496 kB'
[... setup/common.sh@31-32 scan: every field above is read and skipped in turn until AnonHugePages matches ...]
00:04:00.993 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:00.993 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:00.993 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:00.993 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:00.993 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:00.993 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:00.993 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:00.993 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:00.993 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:00.993 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:00.993 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:00.993 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:00.993 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:00.993 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:00.993 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:00.993 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:00.993 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 39521692 kB' 'MemAvailable: 43245312 kB' 'Buffers: 2696 kB' 'Cached: 16643752 kB' 'SwapCached: 0 kB' 'Active: 13577708 kB' 'Inactive: 3500732 kB' 'Active(anon): 12966140 kB' 'Inactive(anon): 0 kB' 'Active(file): 611568 kB' 'Inactive(file): 3500732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 435552 kB' 'Mapped: 182944 kB' 'Shmem: 12534148 kB' 'KReclaimable: 210316 kB' 'Slab: 575768 kB' 'SReclaimable: 210316 kB' 'SUnreclaim: 365452 kB' 'KernelStack: 13104 kB' 'PageTables: 8920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14087584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198508 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1656412 kB' 'DirectMap2M: 19234816 kB' 'DirectMap1G: 48234496 kB'
[... same per-field scan, this time against HugePages_Surp ...]
00:04:00.995 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:00.995 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:00.995 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:00.995 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
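At this point the test has anon=0 and surp=0 and fetches HugePages_Rsvd the same way next. All the snapshots above agree on the pool state: HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0, HugePages_Surp: 0. Below is a short sketch of the kind of sanity check those four counters enable, reusing get_meminfo_sketch from the earlier sketch; the exact conditions verify_nr_hugepages applies are inferred from the trace, not quoted.

# Assumes get_meminfo_sketch from the earlier sketch is in scope.
total=$(get_meminfo_sketch HugePages_Total)
free=$(get_meminfo_sketch HugePages_Free)
rsvd=$(get_meminfo_sketch HugePages_Rsvd)
surp=$(get_meminfo_sketch HugePages_Surp)

echo "total=$total free=$free rsvd=$rsvd surp=$surp"
(( total == 1024 )) || echo "pool size differs from the 1024 pages requested"
(( surp == 0 ))     || echo "kernel allocated surplus pages beyond the static pool"
if (( free == total )); then
  echo "no hugepage has been faulted in yet"
fi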
00:04:00.995 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:00.995 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:00.995 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:00.995 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:00.995 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:00.995 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:00.995 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:00.995 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:00.995 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:00.995 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:00.995 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:00.995 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:00.995 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 39522416 kB' 'MemAvailable: 43246036 kB' 'Buffers: 2696 kB' 'Cached: 16643772 kB' 'SwapCached: 0 kB' 'Active: 13578312 kB' 'Inactive: 3500732 kB' 'Active(anon): 12966744 kB' 'Inactive(anon): 0 kB' 'Active(file): 611568 kB' 'Inactive(file): 3500732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 435768 kB' 'Mapped: 182856 kB' 'Shmem: 12534168 kB' 'KReclaimable: 210316 kB' 'Slab: 575716 kB' 'SReclaimable: 210316 kB' 'SUnreclaim: 365400 kB' 'KernelStack: 13072 kB' 'PageTables: 9048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14087360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198572 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1656412 kB' 'DirectMap2M: 19234816 kB' 'DirectMap1G: 48234496 kB'
[... the same per-field scan, now against HugePages_Rsvd, is still in progress where this excerpt ends ...]
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:00.997 nr_hugepages=1024 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:00.997 resv_hugepages=0 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:00.997 surplus_hugepages=0 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:00.997 anon_hugepages=0 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:00.997 
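
The trace above is bash xtrace output from the get_meminfo helper in setup/common.sh: it mapfiles the relevant meminfo file, strips any "Node N " prefix, then walks the lines with IFS=': ' read -r var val _ until the requested field (here HugePages_Rsvd) matches, and echoes its value. A minimal self-contained sketch of that idiom, assuming GNU bash 4+ with extglob; lookup_meminfo is an illustrative name, not SPDK's exact implementation:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    # lookup_meminfo KEY [NODE] -- print KEY's value from /proc/meminfo,
    # or from /sys/devices/system/node/nodeNODE/meminfo when NODE is given.
    lookup_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local -a mem
        local line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip it so the
        # field name is the first token, as in the global /proc/meminfo.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }

    lookup_meminfo HugePages_Rsvd     # this run: 0, stored as resv above
    lookup_meminfo HugePages_Surp 0   # node 0, used further down

On a hit the helper prints only the numeric value (units like "kB" land in the throwaway _ field), which is why the trace shows a bare "echo 0" followed by "return 0".
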
00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:00.997 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 39522924 kB' 'MemAvailable: 43246544 kB' 'Buffers: 2696 kB' 'Cached: 16643792 kB' 'SwapCached: 0 kB' 'Active: 13576628 kB' 'Inactive: 3500732 kB' 'Active(anon): 12965060 kB' 'Inactive(anon): 0 kB' 'Active(file): 611568 kB' 'Inactive(file): 3500732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 434032 kB' 'Mapped: 182848 kB' 'Shmem: 12534188 kB' 'KReclaimable: 210316 kB' 'Slab: 575748 kB' 'SReclaimable: 210316 kB' 'SUnreclaim: 365432 kB' 'KernelStack: 12864 kB' 'PageTables: 7784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14085268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198364 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1656412 kB' 'DirectMap2M: 19234816 kB' 'DirectMap1G: 48234496 kB'
[setup/common.sh@31-32: fields MemTotal through Unaccepted fail the match against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l and the scan continues]
00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
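
The get_nodes trace above discovers the NUMA topology by globbing /sys/devices/system/node/node+([0-9]) and records 512 hugepages for each of the two nodes, so no_nodes=2 and the expected total is 2 x 512 x 2048 kB = 2 GiB, i.e. the even split behind the test name and behind nr_hugepages=1024. A sketch of the same enumeration against the standard sysfs layout; reading nr_hugepages from the per-node hugepages directory is an assumption about where the 512 comes from, and the variable names are illustrative:

    #!/usr/bin/env bash
    shopt -s extglob nullglob

    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # standard kernel sysfs path for per-node 2 MiB hugepage counts
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done

    echo "no_nodes=${#nodes_sys[@]}"
    for n in "${!nodes_sys[@]}"; do
        echo "node$n: ${nodes_sys[n]} pages x 2048 kB"
    done
    # This host: no_nodes=2 with 512 pages each -> 1024 pages = 2 GiB.
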
+([0-9]) }") 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 23679200 kB' 'MemUsed: 9197740 kB' 'SwapCached: 0 kB' 'Active: 6929052 kB' 'Inactive: 154584 kB' 'Active(anon): 6597892 kB' 'Inactive(anon): 0 kB' 'Active(file): 331160 kB' 'Inactive(file): 154584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6785312 kB' 'Mapped: 63304 kB' 'AnonPages: 301504 kB' 'Shmem: 6299568 kB' 'KernelStack: 7912 kB' 'PageTables: 5004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98496 kB' 'Slab: 267408 kB' 'SReclaimable: 98496 kB' 'SUnreclaim: 168912 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.999 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.000 
01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.000 01:32:24 
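
With node 0 reconciled (surplus 0 and reserved 0 folded into nodes_test), the loop moves on to node 1. The invariant the surrounding hugepages.sh checks enforce, at @107 and @110 above and per node here, is that the kernel's reported totals add up to what the test requested. A standalone restatement of that arithmetic, using the values visible in this run's dumps:

    #!/usr/bin/env bash
    # Values from the meminfo dumps in this run:
    nr_hugepages=1024   # requested: 512 per node on 2 nodes
    resv=0              # HugePages_Rsvd (global)
    surp=0              # HugePages_Surp summed over node0 and node1
    total=1024          # HugePages_Total (global)

    if (( total == nr_hugepages + surp + resv )); then
        echo "OK: $total = $nr_hugepages requested + $surp surplus + $resv reserved"
    else
        echo "hugepage accounting mismatch" >&2
        exit 1
    fi
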
00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:01.000 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 15843328 kB' 'MemUsed: 11821460 kB' 'SwapCached: 0 kB' 'Active: 6647132 kB' 'Inactive: 3346148 kB' 'Active(anon): 6366724 kB' 'Inactive(anon): 0 kB' 'Active(file): 280408 kB' 'Inactive(file): 3346148 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9861216 kB' 'Mapped: 119532 kB' 'AnonPages: 132104 kB' 'Shmem: 6234660 kB' 'KernelStack: 4824 kB' 'PageTables: 2524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 111820 kB' 'Slab: 308340 kB' 'SReclaimable: 111820 kB' 'SUnreclaim: 196520 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[setup/common.sh@31-32: fields MemTotal through Bounce fail the match against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and the scan continues]
00:04:01.001 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.001 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.001 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.001 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.001 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.001 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.001 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.001 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.001 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.001 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.001 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.001 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.001 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.001 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.001 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.001 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.001 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.001 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:01.002 node0=512 expecting 512 00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:01.002 node1=512 expecting 512 00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:01.002 00:04:01.002 real 0m1.605s 00:04:01.002 user 0m0.682s 00:04:01.002 sys 0m0.890s 00:04:01.002 01:32:24 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:01.002 01:32:24 
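The trace above is the get_meminfo helper (tagged setup/common.sh in the xtrace) scanning a meminfo file key by key: it reads /proc/meminfo or, for a per-node query, /sys/devices/system/node/nodeN/meminfo, strips the "Node N " column prefix, then walks the lines with IFS=': ' until the requested key matches. A minimal sketch of that pattern, reconstructed from the trace (a re-implementation for illustration, not the shipped helper):

shopt -s extglob    # the +([0-9]) pattern below is an extended glob

get_meminfo() {
    local get=$1 node=${2:-} line var val _
    local mem_f=/proc/meminfo
    local -a mem
    # Per-node queries (e.g. "get_meminfo HugePages_Surp 1") read the
    # node-local meminfo file when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # drop the "Node N " prefix
    for line in "${mem[@]}"; do
        # First field is the key, second the value; a trailing "kB" lands in $_.
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"    # e.g. 0 for HugePages_Surp in the run above
            return 0
        fi
    done
    return 1
}

get_meminfo HugePages_Surp 1    # prints 0 on the node1 state logged above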
00:04:01.002 ************************************
00:04:01.002 END TEST even_2G_alloc
00:04:01.002 ************************************
00:04:01.002 01:32:24 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:01.002 01:32:24 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']'
00:04:01.002 01:32:24 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable
00:04:01.002 01:32:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:01.002 ************************************
00:04:01.002 START TEST odd_alloc
00:04:01.002 ************************************
00:04:01.002 01:32:24 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # odd_alloc
00:04:01.002 01:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:01.002 01:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:04:01.002 01:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:01.002 01:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:01.002 01:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:01.002 01:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:01.002 01:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:01.002 01:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:01.002 01:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:01.002 01:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:01.002 01:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:01.002 01:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:01.002 01:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:01.002 01:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:01.002 01:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:01.002 01:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:01.002 01:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:04:01.002 01:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:01.002 01:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:01.002 01:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:04:01.002 01:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:01.002 01:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:01.002 01:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:01.002 01:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:01.002 01:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:01.002 01:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:04:01.002 01:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:01.002 01:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:02.381 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:02.381 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:02.381 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:02.381 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:02.381 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:02.381 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:02.381 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:02.381 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:02.381 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:02.381 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:02.381 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:02.381 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:02.381 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:02.381 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:02.381 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:02.381 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:02.381 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:02.381 01:32:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:02.381 01:32:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:04:02.381 01:32:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:02.381 01:32:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:02.381 01:32:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:02.381 01:32:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:02.381 01:32:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:02.381 01:32:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:02.381 01:32:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:02.381 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:02.381 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:02.381 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:02.381 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:02.381 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.381 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:02.381 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:02.381 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.381 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.381 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:02.381 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:02.381 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 39521612 kB' 'MemAvailable: 43245232 kB' 'Buffers: 2696 kB' 'Cached: 16643888 kB' 'SwapCached: 0 kB' 'Active: 13576296 kB' 'Inactive: 3500732 kB' 'Active(anon): 12964728 kB' 'Inactive(anon): 0 kB' 'Active(file): 611568 kB' 'Inactive(file): 3500732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 433704 kB' 'Mapped: 182884 kB' 'Shmem: 12534284 kB' 'KReclaimable: 210316 kB' 'Slab: 575656 kB' 'SReclaimable: 210316 kB' 'SUnreclaim: 365340 kB' 'KernelStack: 12720 kB' 'PageTables: 7500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 14085164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198300 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1656412 kB' 'DirectMap2M: 19234816 kB' 'DirectMap1G: 48234496 kB'
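The get_test_nr_hugepages_per_node trace above spreads nr_hugepages=1025 (2098176 kB at 2048 kB per page, rounded up to a whole page) across _no_nodes=2, assigning from the highest node downward so the odd remainder lands on node0: nodes_test[1]=512, then nodes_test[0]=513. A sketch of that arithmetic under a hypothetical helper name (the real logic lives inline in setup/hugepages.sh):

split_pages_across_nodes() {    # hypothetical name, for illustration only
    local total=$1 nodes=$2 n per
    local -a nodes_test
    for ((n = nodes; n > 0; n--)); do
        per=$((total / n))        # integer division over the nodes still left
        nodes_test[n - 1]=$per
        total=$((total - per))    # 1025 -> 513 -> 0 in the run above
    done
    declare -p nodes_test
}

split_pages_across_nodes 1025 2    # declare -a nodes_test=([0]="513" [1]="512")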
[... xtrace elided: every key of the snapshot above (MemTotal through HardwareCorrupted) compared against AnonHugePages, "continue" on each, until the match ...]
00:04:02.382 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:02.382 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:02.382 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:02.382 01:32:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:02.382 01:32:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:02.382 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:02.382 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:02.382 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:02.382 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:02.382 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.382 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:02.382 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:02.382 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.382 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.382 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:02.382 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:02.382 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 39522536 kB' 'MemAvailable: 43246152 kB' 'Buffers: 2696 kB' 'Cached: 16643888 kB' 'SwapCached: 0 kB' 'Active: 13576372 kB' 'Inactive: 3500732 kB' 'Active(anon): 12964804 kB' 'Inactive(anon): 0 kB' 'Active(file): 611568 kB' 'Inactive(file): 3500732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 433840 kB' 'Mapped: 182924 kB' 'Shmem: 12534284 kB' 'KReclaimable: 210308 kB' 'Slab: 575672 kB' 'SReclaimable: 210308 kB' 'SUnreclaim: 365364 kB' 'KernelStack: 12752 kB' 'PageTables: 7544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 14085180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198268 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1656412 kB' 'DirectMap2M: 19234816 kB' 'DirectMap1G: 48234496 kB'
[... xtrace elided: snapshot keys (MemTotal through HugePages_Rsvd) compared against HugePages_Surp, "continue" on each, until the match ...]
00:04:02.648 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:02.648 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:02.648 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:02.648 01:32:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:02.648 01:32:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:02.648 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:02.648 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:02.648 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:02.648 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:02.648 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.648 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:02.648 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:02.648 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.648 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.648 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:02.648 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 39523380 kB' 'MemAvailable: 43246996 kB' 'Buffers: 2696 kB' 'Cached: 16643908 kB' 'SwapCached: 0 kB' 'Active: 13576496 kB' 'Inactive: 3500732 kB' 'Active(anon): 12964928 kB' 'Inactive(anon): 0 kB' 'Active(file): 611568 kB' 'Inactive(file): 3500732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 433944 kB' 'Mapped: 182848 kB' 'Shmem: 12534304 kB' 'KReclaimable: 210308 kB' 'Slab: 575688 kB' 'SReclaimable: 210308 kB' 'SUnreclaim: 365380 kB' 'KernelStack: 12768 kB' 'PageTables: 7592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 14085200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198268 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1656412 kB' 'DirectMap2M: 19234816 kB' 'DirectMap1G: 48234496 kB'
[... xtrace elided: the HugePages_Rsvd comparison loop begins (MemTotal, MemFree, ...); the excerpt cuts off here ...]
00:04:02.649 01:32:26
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.649 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 
01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:02.650 nr_hugepages=1025 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:02.650 resv_hugepages=0 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:02.650 surplus_hugepages=0 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:02.650 anon_hugepages=0 00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- 
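The trace above is SPDK's get_meminfo helper (test/setup/common.sh) resolving HugePages_Surp and then HugePages_Rsvd to 0: it reads the meminfo file into an array, strips any leading "Node N " prefix that per-node files carry, and walks the keys until the requested one matches, echoing its value. A minimal standalone sketch of that lookup pattern follows; get_meminfo_sketch and its exact layout are illustrative, not the upstream code:

#!/usr/bin/env bash
shopt -s extglob
# Sketch of the lookup traced above: read /proc/meminfo, or the per-node
# file when a node index is given, strip the "Node N " prefix that the
# per-node files carry, and print the value for the requested key.
get_meminfo_sketch() {
    local get=$1 node=${2:-} var val _ line
    local mem_f=/proc/meminfo
    local -a mem
    # Per-node files look like: "Node 0 HugePages_Total:   512"
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # extglob pattern strips "Node N "
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_meminfo_sketch HugePages_Total     # global count, e.g. 1025 on this box
get_meminfo_sketch HugePages_Surp 0    # node0 query, e.g. 0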
00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:02.650 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:02.651 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.651 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.651 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:02.651 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:02.651 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 39523568 kB' 'MemAvailable: 43247184 kB' 'Buffers: 2696 kB' 'Cached: 16643928 kB' 'SwapCached: 0 kB' 'Active: 13576516 kB' 'Inactive: 3500732 kB' 'Active(anon): 12964948 kB' 'Inactive(anon): 0 kB' 'Active(file): 611568 kB' 'Inactive(file): 3500732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 433944 kB' 'Mapped: 182848 kB' 'Shmem: 12534324 kB' 'KReclaimable: 210308 kB' 'Slab: 575688 kB' 'SReclaimable: 210308 kB' 'SUnreclaim: 365380 kB' 'KernelStack: 12768 kB' 'PageTables: 7592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 14085224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198268 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1656412 kB' 'DirectMap2M: 19234816 kB' 'DirectMap1G: 48234496 kB'
[... per-key scan of /proc/meminfo for HugePages_Total (MemTotal through Unaccepted), each non-matching key taking continue at setup/common.sh@32; elided ...]
00:04:02.652 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:02.652 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:04:02.652 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:02.652 01:32:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:02.652 01:32:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:02.652 01:32:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:04:02.652 01:32:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:02.652 01:32:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:02.652 01:32:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:02.652 01:32:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:04:02.652 01:32:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:02.652 01:32:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:02.652 01:32:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:02.652 01:32:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:02.652 01:32:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:02.652 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:02.653 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:04:02.653 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:02.653 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:02.653 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.653 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:02.653 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:02.653 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.653 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.653 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:02.653 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:02.653 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 23677976 kB' 'MemUsed: 9198964 kB' 'SwapCached: 0 kB' 'Active: 6928740 kB' 'Inactive: 154584 kB' 'Active(anon): 6597580 kB' 'Inactive(anon): 0 kB' 'Active(file): 331160 kB' 'Inactive(file): 154584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6785320 kB' 'Mapped: 63316 kB' 'AnonPages: 301092 kB' 'Shmem: 6299576 kB' 'KernelStack: 7880 kB' 'PageTables: 4868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98488 kB' 'Slab: 267412 kB' 'SReclaimable: 98488 kB' 'SUnreclaim: 168924 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
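By this point the test has confirmed the global accounting for the odd allocation (nr_hugepages=1025 with zero reserved and surplus pages) and get_nodes has recorded the expected per-node split, 512 pages on node0 and 513 on node1; the trace now queries node0's meminfo. A small sketch of that per-node cross-check, reading the same sysfs files the trace touches (the variable names and the 1025 figure simply mirror this run):

#!/usr/bin/env bash
# An odd request (1025 hugepages) cannot split evenly across two NUMA
# nodes, so one node is expected to hold 512 and the other 513, and the
# per-node totals must still sum to the global figure.
expected=1025
total=0
for node_dir in /sys/devices/system/node/node[0-9]*; do
    # Per-node lines read "Node 0 HugePages_Total:   512"; field 4 is the count.
    pages=$(awk '$3 == "HugePages_Total:" {print $4}' "$node_dir/meminfo")
    printf '%s: HugePages_Total=%s\n' "${node_dir##*/}" "$pages"
    (( total += pages ))
done
if (( total == expected )); then
    echo "OK: $total hugepages spread across the nodes"
else
    echo "MISMATCH: got $total, expected $expected"
fi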
[... per-key scan of /sys/devices/system/node/node0/meminfo for HugePages_Surp (MemTotal onward), each non-matching key taking continue at setup/common.sh@32 ...]
00:04:02.654 01:32:26
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.654 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.654 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.654 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.654 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.654 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:02.654 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.654 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.654 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.654 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.654 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:02.654 01:32:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:02.654 01:32:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:02.654 01:32:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:02.654 01:32:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:02.654 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.654 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:02.654 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:02.654 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.654 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.654 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:02.654 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:02.654 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.654 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.654 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.654 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.654 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 15844504 kB' 'MemUsed: 11820284 kB' 'SwapCached: 0 kB' 'Active: 6647764 kB' 'Inactive: 3346148 kB' 'Active(anon): 6367356 kB' 'Inactive(anon): 0 kB' 'Active(file): 280408 kB' 'Inactive(file): 3346148 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9861340 kB' 'Mapped: 119532 kB' 'AnonPages: 132808 kB' 'Shmem: 6234784 kB' 'KernelStack: 4872 kB' 'PageTables: 2672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 111820 kB' 'Slab: 308276 kB' 'SReclaimable: 111820 kB' 'SUnreclaim: 196456 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 
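
The repeated trace above is setup/common.sh's get_meminfo walking node1's meminfo one field at a time. The log never shows the script source itself, so the following is a minimal sketch of the pattern the trace implies; the function name get_meminfo_sketch and its exact body are reconstructions, not the actual SPDK code. It selects the per-node sysfs file when a node argument is given, strips the "Node N " prefix those files carry, and scans for the requested field:

# Minimal sketch (assumed reconstruction) of the meminfo lookup traced in this log.
get_meminfo_sketch() {
    local get=$1 node=$2 var val _
    local mem_f=/proc/meminfo
    # Per-node counters live under sysfs; fall back to the system-wide file.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    shopt -s extglob
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node lines are prefixed with "Node N "
    # Same loop shape as the trace: split each line on ': ' and compare field names.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"   # e.g. get_meminfo_sketch HugePages_Surp 1 prints 0 here
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}
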
00:04:02.654 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] [xtrace condensed: the HugePages_Surp scan repeats with continue / IFS=': ' / read -r var val _ for every non-matching node1 meminfo field, MemTotal through HugePages_Free] 00:04:02.656 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
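
Before the verdict lines just below, note how the harness scores an odd page count split across two nodes: it compares requested and actual per-node counts as a sorted multiset rather than node by node, by reusing bash array indices as implicitly ordered keys. That is why "node0=512 expecting 513" and "node1=513 expecting 512" still pass as "512 513 == 512 513". A hedged sketch of the trick, with the array names mirroring the trace and the concrete values taken from it (which array holds "observed" versus "expected" is inferred):

# Order-insensitive comparison of per-node hugepage counts, as in setup/hugepages.sh@126-130.
nodes_test=([0]=512 [1]=513)   # counts observed per node (values from this trace)
nodes_sys=([0]=513 [1]=512)    # counts expected per node, landing on swapped nodes
sorted_t=() sorted_s=()
for node in "${!nodes_test[@]}"; do
    sorted_t[nodes_test[node]]=1   # the count becomes an index; index lists expand ascending
    sorted_s[nodes_sys[node]]=1
done
# Both "${!arr[@]}" lists expand to "512 513", so the swapped layout still passes.
[[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo 'odd_alloc layout accepted'
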
00:04:02.656 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.656 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.656 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.656 01:32:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:02.656 01:32:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:02.656 01:32:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:02.656 01:32:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:02.656 01:32:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:02.656 01:32:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:02.656 node0=512 expecting 513 00:04:02.656 01:32:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:02.656 01:32:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:02.656 01:32:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:02.656 01:32:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:02.656 node1=513 expecting 512 00:04:02.656 01:32:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:02.656 00:04:02.656 real 0m1.545s 00:04:02.656 user 0m0.628s 00:04:02.656 sys 0m0.883s 00:04:02.656 01:32:26 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:02.656 01:32:26 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:02.656 ************************************ 00:04:02.656 END TEST odd_alloc 00:04:02.656 ************************************ 00:04:02.656 01:32:26 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:02.656 01:32:26 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:02.656 01:32:26 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:02.656 01:32:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:02.656 ************************************ 00:04:02.656 START TEST custom_alloc 00:04:02.656 ************************************ 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # custom_alloc 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # 
(( size >= default_hugepages )) 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:02.656 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:02.657 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:02.657 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:02.657 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:02.657 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:02.657 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:02.657 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:02.657 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:02.657 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:02.657 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:02.657 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:02.657 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:02.657 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:02.657 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:02.657 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:02.657 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:02.657 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:02.657 01:32:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:02.657 01:32:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.657 01:32:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:04.034 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:04.034 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:04.034 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:04.034 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:04.034 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:04.034 0000:00:04.2 (8086 
0e22): Already using the vfio-pci driver 00:04:04.034 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:04.034 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:04.034 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:04.034 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:04.034 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:04.034 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:04.034 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:04.034 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:04.034 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:04.034 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:04.034 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:04.034 01:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:04.034 01:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:04.034 01:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:04.034 01:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:04.034 01:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:04.034 01:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:04.034 01:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:04.034 01:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:04.034 01:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:04.034 01:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:04.034 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:04.034 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:04.034 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:04.034 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.034 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.034 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.034 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.034 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.034 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.034 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.034 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.034 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 38459064 kB' 'MemAvailable: 42182680 kB' 'Buffers: 2696 kB' 'Cached: 16644012 kB' 'SwapCached: 0 kB' 'Active: 13576668 kB' 'Inactive: 3500732 kB' 'Active(anon): 12965100 kB' 'Inactive(anon): 0 kB' 'Active(file): 611568 kB' 'Inactive(file): 3500732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 433908 kB' 'Mapped: 182860 
kB' 'Shmem: 12534408 kB' 'KReclaimable: 210308 kB' 'Slab: 575220 kB' 'SReclaimable: 210308 kB' 'SUnreclaim: 364912 kB' 'KernelStack: 12768 kB' 'PageTables: 7540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 14085420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198412 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1656412 kB' 'DirectMap2M: 19234816 kB' 'DirectMap1G: 48234496 kB' 00:04:04.034 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] [xtrace condensed: the AnonHugePages scan repeats with continue / IFS=': ' / read -r var val _ for every non-matching meminfo field, MemTotal through HardwareCorrupted] 00:04:04.036 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.036 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.036 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:04.036 01:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:04.036 01:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp [xtrace condensed: the same local get/node/var/val, mem_f=/proc/meminfo, mapfile and IFS=': ' preamble as in the AnonHugePages query above] 00:04:04.036 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 38460720 kB' 'MemAvailable: 42184336 kB' 'Buffers: 2696 kB' 'Cached: 16644012 kB' 'SwapCached: 0 kB' 'Active: 13578064 kB' 'Inactive: 3500732 kB' 'Active(anon): 12966496 kB' 'Inactive(anon): 0 kB' 'Active(file): 611568 kB' 'Inactive(file): 3500732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 435376 kB' 'Mapped: 183372 kB' 'Shmem: 12534408 kB' 'KReclaimable: 210308 kB' 'Slab: 575208 kB' 'SReclaimable: 210308 kB' 'SUnreclaim: 364900 kB' 'KernelStack: 12752 kB' 'PageTables: 7484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 14087716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198348 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1656412 kB' 'DirectMap2M: 19234816 kB' 'DirectMap1G: 48234496 kB'
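
The dump above is custom_alloc's verify pass confirming HugePages_Total: 1536, the sum the test staged per node earlier in this trace (nodes_hp[0]=512 from a 1 GiB request, nodes_hp[1]=1024 from a 2 GiB request, joined into HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'). A standalone sketch of that arithmetic follows; the 2048 kB page size comes from the 'Hugepagesize' line above, while the helper name pages_for and the string-join loop are illustrative reconstructions, not the SPDK scripts themselves:

# Sketch of custom_alloc's size-to-pages math and HUGENODE construction (reconstruction).
default_hugepages=2048                               # kB per page, from 'Hugepagesize: 2048 kB'
pages_for() { echo $(( $1 / default_hugepages )); }  # kB requested -> number of hugepages
nodes_hp[0]=$(pages_for 1048576)                     # 1 GiB -> 512 pages on node 0
nodes_hp[1]=$(pages_for 2097152)                     # 2 GiB -> 1024 pages on node 1
HUGENODE=
for node in "${!nodes_hp[@]}"; do
    HUGENODE+="${HUGENODE:+,}nodes_hp[$node]=${nodes_hp[node]}"
done
echo "$HUGENODE"                                     # nodes_hp[0]=512,nodes_hp[1]=1024
echo $(( nodes_hp[0] + nodes_hp[1] ))                # 1536, matching HugePages_Total above
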
00:04:04.036 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] [xtrace condensed: the system-wide HugePages_Surp scan repeats with continue / IFS=': ' / read -r var val _ for every non-matching meminfo field; this capture breaks off mid-scan after the PageTables check] 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.037 
01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.037 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.038 01:32:27 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.038 01:32:27 
setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 38454936 kB' 'MemAvailable: 42178552 kB' 'Buffers: 2696 kB' 'Cached: 16644036 kB' 'SwapCached: 0 kB' 'Active: 13581760 kB' 'Inactive: 3500732 kB' 'Active(anon): 12970192 kB' 'Inactive(anon): 0 kB' 'Active(file): 611568 kB' 'Inactive(file): 3500732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 439044 kB' 'Mapped: 183296 kB' 'Shmem: 12534432 kB' 'KReclaimable: 210308 kB' 'Slab: 575176 kB' 'SReclaimable: 210308 kB' 'SUnreclaim: 364868 kB' 'KernelStack: 12800 kB' 'PageTables: 7584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 14091580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198352 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1656412 kB' 'DirectMap2M: 19234816 kB' 'DirectMap1G: 48234496 kB' 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.038 01:32:27 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.038 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.039 01:32:27 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.039 
01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.039 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.040 01:32:27 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:04.040 nr_hugepages=1536 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:04.040 resv_hugepages=0 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:04.040 surplus_hugepages=0 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:04.040 anon_hugepages=0 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 38454436 kB' 'MemAvailable: 42178052 kB' 'Buffers: 2696 kB' 'Cached: 16644056 kB' 'SwapCached: 0 kB' 'Active: 13576696 kB' 'Inactive: 3500732 kB' 'Active(anon): 12965128 kB' 'Inactive(anon): 0 kB' 'Active(file): 611568 kB' 'Inactive(file): 3500732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 434000 kB' 'Mapped: 182904 kB' 'Shmem: 
12534452 kB' 'KReclaimable: 210308 kB' 'Slab: 575176 kB' 'SReclaimable: 210308 kB' 'SUnreclaim: 364868 kB' 'KernelStack: 12800 kB' 'PageTables: 7604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 14085484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198364 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1656412 kB' 'DirectMap2M: 19234816 kB' 'DirectMap1G: 48234496 kB' 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.040 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.041 01:32:27 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.041 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.041 01:32:27 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [... xtrace elided: the remaining meminfo fields (Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted) each fail the HugePages_Total match and continue ...]
00:04:04.302 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:04.302 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:04:04.302 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:04.302 01:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:04.302 01:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:04.302 01:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:04:04.302 01:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
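The long run of near-identical entries above is one loop in setup/common.sh unrolled by xtrace: get_meminfo walks the selected meminfo file line by line, skipping each field until the requested key matches, then echoes its value (1536 hugepages system-wide here). A minimal sketch of that pattern, reconstructed from the trace rather than copied from the SPDK source, so names and details may differ:

    #!/usr/bin/env bash
    # Reconstruction of the lookup the xtrace above unrolls (not verbatim source).
    shopt -s extglob   # needed for the +([0-9]) patterns

    get_meminfo() {
        local get=$1 node=${2:-}   # key to look up, optional NUMA node id
        local var val _ line
        local mem_f=/proc/meminfo
        # Prefer the per-node view when a node id was given and that node exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix of per-node files
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the repeated '-- # continue' entries
            echo "$val"
            return 0
        done
        return 1
    }

Called as get_meminfo HugePages_Total on this box it would print 1536, matching the echo in the trace.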
00:04:04.302 01:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:04.302 01:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:04.302 01:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:04.302 01:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:04.302 01:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:04.302 01:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:04.302 01:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:04.302 01:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:04.302 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:04.302 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:04:04.302 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:04.302 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:04.302 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.302 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:04.302 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:04.302 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.302 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.302 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:04.302 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:04.302 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 23668356 kB' 'MemUsed: 9208584 kB' 'SwapCached: 0 kB' 'Active: 6930808 kB' 'Inactive: 154584 kB' 'Active(anon): 6599648 kB' 'Inactive(anon): 0 kB' 'Active(file): 331160 kB' 'Inactive(file): 154584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6785320 kB' 'Mapped: 63328 kB' 'AnonPages: 303212 kB' 'Shmem: 6299576 kB' 'KernelStack: 7896 kB' 'PageTables: 4744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98488 kB' 'Slab: 267232 kB' 'SReclaimable: 98488 kB' 'SUnreclaim: 168744 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... xtrace elided: setup/common.sh@31-32 loop, every node0 field from MemTotal through HugePages_Free fails the HugePages_Surp match and continues ...]
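For the per-node pass, get_nodes enumerates /sys/devices/system/node/node* with an extglob pattern and records each node's hugepage count, and the harness then folds reserved and surplus pages into the expected per-node totals. A sketch pieced together from the hugepages.sh@27-33 and @115-117 entries above; how nodes_test is seeded is an assumption, and it reuses the get_meminfo sketch shown earlier:

    # Inferred from the trace above, not the verbatim SPDK source.
    shopt -s extglob
    declare -a nodes_sys nodes_test
    for node in /sys/devices/system/node/node+([0-9]); do
        id=${node##*node}                                    # "node0" -> "0"
        nodes_sys[id]=$(get_meminfo HugePages_Total "$id")   # 512 / 1024 in this run
    done
    no_nodes=${#nodes_sys[@]}          # 2 on this rig
    nodes_test=("${nodes_sys[@]}")     # assumption: the real harness computes this earlier
    resv=0                             # no reserved pages in this run
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                                   # spread reserved pages
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))  # surplus, 0 here
    done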
00:04:04.304 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:04.304 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:04.304 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:04.304 01:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:04.304 01:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:04.304 01:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:04.304 01:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:04.304 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:04.304 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:04:04.304 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:04.304 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:04.304 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.304 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:04.304 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:04.304 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.304 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.304 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:04.304 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 14787168 kB' 'MemUsed: 12877620 kB' 'SwapCached: 0 kB' 'Active: 6649140 kB' 'Inactive: 3346148 kB' 'Active(anon): 6368732 kB' 'Inactive(anon): 0 kB' 'Active(file): 280408 kB' 'Inactive(file): 3346148 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9861472 kB' 'Mapped: 119968 kB' 'AnonPages: 133992 kB' 'Shmem: 6234916 kB' 'KernelStack: 4856 kB' 'PageTables: 2744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 111820 kB' 'Slab: 307944 kB' 'SReclaimable: 111820 kB' 'SUnreclaim: 196124 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:04.304 01:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[... xtrace elided: setup/common.sh@31-32 loop, every node1 field from MemTotal through HugePages_Free fails the HugePages_Surp match and continues ...]
00:04:04.305 01:32:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:04.305 01:32:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:04.305 01:32:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:04.305 01:32:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:04.305 01:32:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:04.305 01:32:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:04.305 01:32:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:04.305 01:32:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:04.305 node0=512 expecting 512
01:32:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:04.305 01:32:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:04.305 01:32:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:04.305 01:32:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:04:04.305 node1=1024 expecting 1024
01:32:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:04:04.305
00:04:04.305 real 0m1.540s
00:04:04.305 user 0m0.662s
00:04:04.305 sys 0m0.842s
01:32:28 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable
01:32:28 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:04.305 ************************************
00:04:04.305 END TEST custom_alloc
00:04:04.305 ************************************
01:32:28 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
01:32:28 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']'
01:32:28 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable
01:32:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST no_shrink_alloc
************************************
01:32:28 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # no_shrink_alloc
01:32:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
01:32:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
01:32:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
01:32:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
01:32:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
01:32:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
01:32:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
01:32:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
01:32:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
01:32:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
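The closing check [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] in the trace looks odd but is a standard bash idiom: the right-hand side of [[ == ]] is a glob pattern, so the expected string is escaped character by character to force a literal comparison. Quoting does the same job, as this small standalone sketch shows:

    # Why the trace shows \5\1\2\,\1\0\2\4: an unquoted right-hand side of
    # [[ == ]] is a glob, so escaping every character (or quoting) makes the
    # comparison literal.
    nodes_test=(512 1024)
    expected=512,1024
    observed="${nodes_test[0]},${nodes_test[1]}"
    [[ $observed == "$expected" ]] && echo 'per-node hugepage layout verified'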
01:32:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
01:32:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
01:32:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
01:32:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
01:32:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
01:32:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
01:32:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
01:32:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
01:32:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
01:32:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
01:32:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
01:32:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:05.731 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:05.731 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:05.731 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:05.731 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:05.731 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:05.731 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:05.731 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:05.731 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:05.731 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:05.731 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:05.731 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:05.731 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:05.731 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:05.731 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:05.731 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:05.731 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:05.731 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:05.731 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
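get_test_nr_hugepages turning size=2097152 into nr_hugepages=1024 is consistent with both values being kilobytes, given the 'Hugepagesize: 2048 kB' visible in the meminfo snapshots. A sketch of that arithmetic; the unit handling is read off the logged numbers, not taken from the source:

    # 2097152 kB requested / 2048 kB per hugepage = 1024 pages, all on node 0.
    # Assumption: size and default_hugepages are both in kB.
    declare -a nodes_test
    size=2097152
    default_hugepages=2048
    if (( size >= default_hugepages )); then
        nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024
    fi
    nodes_test[0]=$nr_hugepages   # the single requested node id '0' takes the whole pool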
01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 39440172 kB' 'MemAvailable: 43163788 kB' 'Buffers: 2696 kB' 'Cached: 16644148 kB' 'SwapCached: 0 kB' 'Active: 13577556 kB' 'Inactive: 3500732 kB' 'Active(anon): 12965988 kB' 'Inactive(anon): 0 kB' 'Active(file): 611568 kB' 'Inactive(file): 3500732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 434680 kB' 'Mapped: 182920 kB' 'Shmem: 12534544 kB' 'KReclaimable: 210308 kB' 'Slab: 574976 kB' 'SReclaimable: 210308 kB' 'SUnreclaim: 364668 kB' 'KernelStack: 12832 kB' 'PageTables: 7684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14085816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198348 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1656412 kB' 'DirectMap2M: 19234816 kB' 'DirectMap1G: 48234496 kB'
[... xtrace elided: the setup/common.sh@31-32 field-by-field scan for AnonHugePages over the snapshot above (MemTotal, MemFree, MemAvailable, Buffers, Cached, ... each failing the match and continuing) ...]
continue 00:04:05.733 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.733 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.733 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.733 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.733 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.733 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.733 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.733 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.733 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.733 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.733 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.733 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.733 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.733 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.733 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.733 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.733 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.733 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:05.733 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:05.733 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.733 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:05.733 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.733 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.733 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.733 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.733 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.733 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.733 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.733 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.733 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.733 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 39440172 kB' 'MemAvailable: 43163788 kB' 'Buffers: 2696 kB' 'Cached: 16644152 kB' 'SwapCached: 0 kB' 'Active: 13578344 kB' 'Inactive: 3500732 kB' 'Active(anon): 12966776 kB' 'Inactive(anon): 0 kB' 'Active(file): 611568 kB' 'Inactive(file): 3500732 kB' 'Unevictable: 3072 kB' 'Mlocked: 
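[Editor's note: the common.sh@28-33 records above show get_meminfo's approach: slurp the meminfo file with mapfile, strip any "Node <n> " prefix, then split each "key: value" line with IFS=': ' read and continue past every key that is not the one requested. A minimal runnable sketch of that pattern follows; get_mem is a hypothetical stand-in for illustration, not the actual setup/common.sh function.]

    #!/usr/bin/env bash
    shopt -s extglob                    # required by the +([0-9]) pattern below

    # Hypothetical sketch of the scan traced at setup/common.sh@28-33.
    get_mem() {
        local get=$1 var val _ line
        local -a mem
        mapfile -t mem < /proc/meminfo
        # Per-node meminfo lines carry a "Node <n> " prefix; stripping it lets
        # the same loop serve /sys/devices/system/node/node*/meminfo too.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue    # the continues filling the trace
            echo "$val"                         # common.sh@33: echo value, return 0
            return 0
        done
        return 1
    }

    get_mem HugePages_Surp   # prints 0 here, matching 'HugePages_Surp: 0' in the snapshot above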
[... trace elided: setup/common.sh@31-32 scan loop skips every key from MemTotal through HugePages_Rsvd via continue while looking for HugePages_Surp ...]
00:04:05.735 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:05.735 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:05.735 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:05.735 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:05.735 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:05.735 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
[... common.sh@18-31 get_meminfo prologue as above: node unset, mem_f=/proc/meminfo, mapfile -t mem, "Node <n> " prefix strip, IFS=': ' read ...]
00:04:05.735 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 39440204 kB' 'MemAvailable: 43163820 kB' 'Buffers: 2696 kB' 'Cached: 16644152 kB' 'SwapCached: 0 kB' 'Active: 13577212 kB' 'Inactive: 3500732 kB' 'Active(anon): 12965644 kB' 'Inactive(anon): 0 kB' 'Active(file): 611568 kB' 'Inactive(file): 3500732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 434340 kB' 'Mapped: 182956 kB' 'Shmem: 12534548 kB' 'KReclaimable: 210308 kB' 'Slab: 574992 kB' 'SReclaimable: 210308 kB' 'SUnreclaim: 364684 kB' 'KernelStack: 12784 kB' 'PageTables: 7548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14085856 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198300 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1656412 kB' 'DirectMap2M: 19234816 kB' 'DirectMap1G: 48234496 kB'
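[Editor's note: the common.sh@18-25 records above show why the global file is used: node is left empty, so the probe for /sys/devices/system/node/node/meminfo fails and mem_f stays /proc/meminfo. A hedged reconstruction of that file selection follows; pick_meminfo is a hypothetical name inferred from the trace, not the real helper.]

    #!/usr/bin/env bash
    # Hypothetical reconstruction of the mem_f selection traced at common.sh@18-25.
    pick_meminfo() {
        local node=$1                 # empty in this run (common.sh@18: local node=)
        local mem_f=/proc/meminfo     # common.sh@22 default
        # With a node id, prefer that node's meminfo (lines prefixed "Node <n> ").
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        printf '%s\n' "$mem_f"
    }

    pick_meminfo     # -> /proc/meminfo, the path taken throughout this log
    pick_meminfo 0   # -> the NUMA node 0 file, when one is requested and present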
[... trace elided: setup/common.sh@31-32 scan loop skips every key from MemTotal through HugePages_Free via continue while looking for HugePages_Rsvd ...]
00:04:05.737 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:05.737 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:05.737 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:05.737 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:05.737 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:05.737 nr_hugepages=1024
00:04:05.737 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:05.737 resv_hugepages=0
00:04:05.737 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:05.737 surplus_hugepages=0
00:04:05.737 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:05.738 anon_hugepages=0
00:04:05.738 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:05.738 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:05.738 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:05.738 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
[... common.sh@18-31 get_meminfo prologue as above: node unset, mem_f=/proc/meminfo, mapfile -t mem, "Node <n> " prefix strip, IFS=': ' read ...]
00:04:05.738 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 39440640 kB' 'MemAvailable: 43164256 kB' 'Buffers: 2696 kB' 'Cached: 16644188 kB' 'SwapCached: 0 kB' 'Active: 13577332 kB' 'Inactive: 3500732 kB' 'Active(anon): 12965764 kB' 'Inactive(anon): 0 kB' 'Active(file): 611568 kB' 'Inactive(file): 3500732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 434432 kB' 'Mapped: 182880 kB' 'Shmem: 12534584 kB' 'KReclaimable: 210308 kB' 'Slab: 574984 kB' 'SReclaimable: 210308 kB' 'SUnreclaim: 364676 kB' 'KernelStack: 12768 kB' 'PageTables: 7492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14085880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198300 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1656412 kB' 'DirectMap2M: 19234816 kB' 'DirectMap1G: 48234496 kB'
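[Editor's note: hugepages.sh@102-109 above print the derived counters and then assert that the pool is consistent: the configured nr_hugepages plus surplus and reserved pages must account for the 1024 pages requested. A small runnable restatement with this run's values follows; the variable names mirror the trace, but the standalone script itself is illustrative.]

    #!/usr/bin/env bash
    # Illustrative restatement of the setup/hugepages.sh@107-109 checks,
    # using the values echoed in this run.
    nr_hugepages=1024   # pool size under test
    surp=0              # HugePages_Surp via get_meminfo
    resv=0              # HugePages_Rsvd via get_meminfo
    anon=0              # AnonHugePages via get_meminfo (tracked, not summed here)

    # @107: target pages == pool + surplus + reserved
    (( 1024 == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2
    # @109: with no surplus or reserved pages, the pool itself is the target
    (( 1024 == nr_hugepages )) || echo "unexpected nr_hugepages" >&2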
210308 kB' 'Slab: 574984 kB' 'SReclaimable: 210308 kB' 'SUnreclaim: 364676 kB' 'KernelStack: 12768 kB' 'PageTables: 7492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14085880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198300 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1656412 kB' 'DirectMap2M: 19234816 kB' 'DirectMap1G: 48234496 kB' 00:04:05.738 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.738 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.738 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.738 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.738 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.738 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.738 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.738 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.738 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.738 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.738 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.738 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.738 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.738 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.738 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.738 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.738 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.738 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.738 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.738 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.738 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.738 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.738 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.738 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.738 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.738 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.738 01:32:29 
00:04:05.740 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:05.740 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:05.740 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:05.740 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:05.740 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:05.740 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:05.740 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:05.740 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
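The get_nodes step just traced discovers NUMA nodes by globbing sysfs: with extglob on, node+([0-9]) matches node0, node1, ... but not neighbouring entries like "possible" or "online". A sketch of the same enumeration, reusing the hypothetical get_meminfo from the sketch above and the array name from the trace:

    # Hypothetical reconstruction of the get_nodes walk: record how many
    # hugepages each node currently exposes, keyed by node number.
    shopt -s extglob
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
    done
    no_nodes=${#nodes_sys[@]}        # 2 on this machine: node0=1024, node1=0
    (( no_nodes > 0 )) || exit 1

${node##*node} strips everything up to the last "node" in the path, leaving just the numeric index, which is exactly the substitution shown at hugepages.sh@30.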
00:04:05.740 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:05.740 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:05.740 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:05.740 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:05.740 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:05.740 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22641372 kB' 'MemUsed: 10235568 kB' 'SwapCached: 0 kB' 'Active: 6927928 kB' 'Inactive: 154584 kB' 'Active(anon): 6596768 kB' 'Inactive(anon): 0 kB' 'Active(file): 331160 kB' 'Inactive(file): 154584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6785400 kB' 'Mapped: 63340 kB' 'AnonPages: 300240 kB' 'Shmem: 6299656 kB' 'KernelStack: 7896 kB' 'PageTables: 4800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98488 kB' 'Slab: 267204 kB' 'SReclaimable: 98488 kB' 'SUnreclaim: 168716 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:05.741 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:05.741 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:05.741 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:05.742 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:05.742 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:05.742 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:05.742 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:05.742 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:05.742 node0=1024 expecting 1024
00:04:05.742 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:05.742 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:05.742 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:05.742 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:04:05.742 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:05.742 01:32:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:07.122 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:07.122 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:07.122 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:07.122 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:07.122 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:07.122 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:07.122 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:07.122 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:07.122 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:07.122 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:07.122 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:07.122 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:07.122 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:07.122 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:07.122 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:07.122 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:07.122 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:07.122 INFO: Requested 512 hugepages but 1024 already allocated on node0
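That INFO line is the behavior this no_shrink_alloc test exists to verify: with CLEAR_HUGE=no and NRHUGE=512, setup.sh leaves the existing 1024-page reservation alone instead of shrinking it. The real scripts/setup.sh logic is not shown in this log; a hedged sketch of such a grow-only policy against the standard sysfs knob:

    # Hypothetical grow-only reservation, mimicking the INFO message above.
    # Writing the sysfs knob requires root; hugepages-2048kB matches the
    # "Hugepagesize: 2048 kB" reported in the meminfo snapshots.
    ensure_hugepages() {
        local want=$1 node=${2:-0}
        local knob=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
        local have
        have=$(<"$knob")
        if (( have >= want )); then
            echo "INFO: Requested $want hugepages but $have already allocated on node$node"
            return 0
        fi
        echo "$want" >"$knob"    # only ever grows the pool
    }

    ensure_hugepages 512 0   # with 1024 already reserved, prints the INFO line and keeps 1024

Never shrinking avoids freeing pages that a running application (here, an SPDK target) may already have mapped.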
00:04:07.122 01:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:07.122 01:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:07.122 01:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:07.122 01:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:07.122 01:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:07.122 01:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:07.122 01:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:07.122 01:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:07.122 01:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:07.122 01:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:07.122 01:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 39436752 kB' 'MemAvailable: 43160364 kB' 'Buffers: 2696 kB' 'Cached: 16644256 kB' 'SwapCached: 0 kB' 'Active: 13576080 kB' 'Inactive: 3500732 kB' 'Active(anon): 12964512 kB' 'Inactive(anon): 0 kB' 'Active(file): 611568 kB' 'Inactive(file): 3500732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 433100 kB' 'Mapped: 183060 kB' 'Shmem: 12534652 kB' 'KReclaimable: 210300 kB' 'Slab: 574972 kB' 'SReclaimable: 210300 kB' 'SUnreclaim: 364672 kB' 'KernelStack: 12768 kB' 'PageTables: 7492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14086124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198332 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1656412 kB' 'DirectMap2M: 19234816 kB' 'DirectMap1G: 48234496 kB'
00:04:07.123 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.123 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:07.123 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:07.123 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
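The @96 test just traced gates the anon_hugepages accounting on transparent hugepages being enabled at all: the kernel reports the available modes with the active one bracketed (on this host "always [madvise] never", i.e. madvise), and the lookup only runs when the active mode is not "never". A small sketch of that check, reusing the hypothetical get_meminfo from earlier:

    # Reads the active THP mode; the file literally holds e.g. "always [madvise] never".
    thp_enabled=$(</sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp_enabled != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # kB of THP-backed anonymous memory, 0 here
    else
        anon=0                              # THP off: nothing to count
    fi

The glob *"[never]"* is the unescaped form of the *\[\n\e\v\e\r\]* pattern in the trace; the backslashes there only keep the brackets literal inside [[ ]].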
00:04:07.123 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:07.123 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:07.123 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:07.123 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:07.123 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:07.123 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:07.123 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:07.123 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:07.123 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:07.123 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:07.123 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:07.123 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:07.124 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 39437808 kB' 'MemAvailable: 43161416 kB' 'Buffers: 2696 kB' 'Cached: 16644256 kB' 'SwapCached: 0 kB' 'Active: 13576808 kB' 'Inactive: 3500732 kB' 'Active(anon): 12965240 kB' 'Inactive(anon): 0 kB' 'Active(file): 611568 kB' 'Inactive(file): 3500732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 433760 kB' 'Mapped: 182924 kB' 'Shmem: 12534652 kB' 'KReclaimable: 210292 kB' 'Slab: 575000 kB' 'SReclaimable: 210292 kB' 'SUnreclaim: 364708 kB' 'KernelStack: 12832 kB' 'PageTables: 7644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14086144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198284 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1656412 kB' 'DirectMap2M: 19234816 kB' 'DirectMap1G: 48234496 kB'
[... set -x trace elided: the @31 read / @32 compare-and-continue pair repeats for every key from MemTotal through HugePages_Rsvd ...]
00:04:07.125 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.125 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:07.125 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:07.125 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
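Before each scan, the common.sh@17-@29 lines show how get_meminfo picks its data source: the call passes no node argument, so node is empty, the probe of /sys/devices/system/node/node/meminfo (note the missing node id in the path) fails, and the function falls back to /proc/meminfo; the mapfile snapshot then has any "Node <n> " prefixes stripped so per-node and global sources parse identically. A sketch of that step under the same caveat (reconstructed from the trace, not verbatim SPDK code):

    shopt -s extglob    # the +([0-9]) pattern below is an extended glob

    snapshot_meminfo() {
        local node=$1
        local mem_f=/proc/meminfo
        # With $node empty this probes ".../node/node/meminfo", which does
        # not exist -- the [[ -e ... ]] that evaluates false in the trace.
        if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local mem
        mapfile -t mem < "$mem_f"
        # Per-node entries read "Node 0 MemTotal: ..."; strip that prefix so
        # both sources look the same to the key-matching loop.
        mem=("${mem[@]#Node +([0-9]) }")
        printf '%s\n' "${mem[@]}"
    }

    snapshot_meminfo | head -n 3    # MemTotal / MemFree / MemAvailable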
00:04:07.125 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[... set -x trace elided: function entry (setup/common.sh@17-@31) identical in shape to the HugePages_Surp call above; node is again unset, so mem_f stays /proc/meminfo ...]
00:04:07.125 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 39437808 kB' 'MemAvailable: 43161416 kB' 'Buffers: 2696 kB' 'Cached: 16644256 kB' 'SwapCached: 0 kB' 'Active: 13575752 kB' 'Inactive: 3500732 kB' 'Active(anon): 12964184 kB' 'Inactive(anon): 0 kB' 'Active(file): 611568 kB' 'Inactive(file): 3500732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 432700 kB' 'Mapped: 182884 kB' 'Shmem: 12534652 kB' 'KReclaimable: 210292 kB' 'Slab: 575000 kB' 'SReclaimable: 210292 kB' 'SUnreclaim: 364708 kB' 'KernelStack: 12816 kB' 'PageTables: 7532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14086164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198300 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1656412 kB' 'DirectMap2M: 19234816 kB' 'DirectMap1G: 48234496 kB'
[... set -x trace elided: the @31 read / @32 compare-and-continue pair repeats for every key from MemTotal through HugePages_Free ...]
00:04:07.388 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:07.388 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:07.388 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:07.388 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:07.388 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:07.388 nr_hugepages=1024
00:04:07.388 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:07.388 resv_hugepages=0
00:04:07.388 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:07.388 surplus_hugepages=0
00:04:07.388 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:07.388 anon_hugepages=0
00:04:07.388 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:07.388 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
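With the three lookups done, hugepages.sh@102-@109 prints the counters and asserts that the 1024-page pool survived the allocation test intact: no anonymous hugepages in use, no surplus pages grown, no reservations outstanding, so the total accounts for every page (1024 == 1024 + 0 + 0). Restated as a standalone check under the trace's variable names (an illustration, not the SPDK test itself):

    nr_hugepages=1024    # pool size this test expects

    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)    # THP, not hugetlb
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)   # surplus pages
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)   # reserved pages
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)

    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"

    # Every page must be accounted for: nothing grown past the configured
    # pool and nothing still reserved, i.e. 1024 == 1024 + 0 + 0 here.
    (( total == nr_hugepages + surp + resv )) || exit 1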
00:04:07.388 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[... set -x trace elided: function entry (setup/common.sh@17-@31) identical in shape to the calls above; node unset, mem_f=/proc/meminfo ...]
00:04:07.388 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 39441744 kB' 'MemAvailable: 43165352 kB' 'Buffers: 2696 kB' 'Cached: 16644300 kB' 'SwapCached: 0 kB' 'Active: 13576264 kB' 'Inactive: 3500732 kB' 'Active(anon): 12964696 kB' 'Inactive(anon): 0 kB' 'Active(file): 611568 kB' 'Inactive(file): 3500732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 433240 kB' 'Mapped: 182884 kB' 'Shmem: 12534696 kB' 'KReclaimable: 210292 kB' 'Slab: 574960 kB' 'SReclaimable: 210292 kB' 'SUnreclaim: 364668 kB' 'KernelStack: 12848 kB' 'PageTables: 7632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14086188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198316 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1656412 kB' 'DirectMap2M: 19234816 kB' 'DirectMap1G: 48234496 kB'
[... set -x trace elided: the @31 read / @32 compare-and-continue pair repeats for every key from MemTotal through KernelStack; the trace is cut off mid-compare below ...]
00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables ==
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.389 01:32:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.389 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.390 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.390 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.390 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.390 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.390 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.390 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.390 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.390 01:32:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:07.390 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.390 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.390 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.390 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.390 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.390 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.390 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.390 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.390 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.390 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.390 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.390 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.390 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.390 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:07.390 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:07.390 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:07.390 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:07.390 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:07.390 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.390 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:07.390 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.390 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:07.390 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:07.390 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:07.390 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:07.390 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:07.390 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:07.390 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.390 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:07.390 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:07.390 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.390 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.390 01:32:31 
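A note on the `\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l` noise throughout this trace: inside `[[ ... == ... ]]` an unquoted right-hand side is a glob pattern, so the setup scripts quote it to force a literal comparison, and `set -x` re-prints the quoted word with every character backslash-escaped. A minimal illustration (standalone bash, not taken from the SPDK scripts):

    key=HugePages_Total

    # Quoted RHS: literal string comparison. Under set -x this is rendered as
    #   [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
    [[ $key == "HugePages_Total" ]] && echo literal-match

    # Unquoted RHS: treated as a glob pattern instead.
    [[ $key == HugePages* ]] && echo pattern-match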
00:04:07.390 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:07.390 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:07.390 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:07.390 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:07.390 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:07.390 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22635384 kB' 'MemUsed: 10241556 kB' 'SwapCached: 0 kB' 'Active: 6927728 kB' 'Inactive: 154584 kB' 'Active(anon): 6596568 kB' 'Inactive(anon): 0 kB' 'Active(file): 331160 kB' 'Inactive(file): 154584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6785496 kB' 'Mapped: 63344 kB' 'AnonPages: 299904 kB' 'Shmem: 6299752 kB' 'KernelStack: 7912 kB' 'PageTables: 4648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98488 kB' 'Slab: 267244 kB' 'SReclaimable: 98488 kB' 'SUnreclaim: 168756 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:07.390 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:07.390-00:04:07.391 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [repetitive xtrace condensed: the same read loop `continue`s past every node0 meminfo key from MemTotal through HugePages_Free until HugePages_Surp comes up]
00:04:07.391 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.391 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:07.391 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:07.391 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:07.391 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:07.391 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:07.391 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:07.391 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:07.391 node0=1024 expecting 1024
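The two condensed loops above are setup/common.sh's get_meminfo scanning /proc/meminfo, then a per-node meminfo file, field by field. A minimal standalone sketch of that parsing pattern, under a hypothetical name (meminfo_value is not an SPDK helper):

    #!/usr/bin/env bash
    # meminfo_value KEY [NODE] -- print the value recorded for KEY, as get_meminfo does.
    meminfo_value() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo mem
        # Prefer the per-node file when a node index is given and the file exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        shopt -s extglob
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N " prefix
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    meminfo_value HugePages_Surp 0   # prints 0 for node0 in the run above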
00:04:07.391 01:32:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:07.391 real 0m3.060s
00:04:07.391 user 0m1.211s
00:04:07.391 sys 0m1.780s
00:04:07.391 01:32:31 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable
00:04:07.391 01:32:31 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:07.391 ************************************
00:04:07.391 END TEST no_shrink_alloc
00:04:07.391 ************************************
00:04:07.391 01:32:31 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:04:07.391 01:32:31 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:04:07.391 01:32:31 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:07.391 01:32:31 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:07.391 01:32:31 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:07.391 01:32:31 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:07.391 01:32:31 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:07.391 01:32:31 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:07.391 01:32:31 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:07.391 01:32:31 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:07.392 01:32:31 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:07.392 01:32:31 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:07.392 01:32:31 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:07.392 01:32:31 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:07.392 real 0m12.333s
00:04:07.392 user 0m4.626s
00:04:07.392 sys 0m6.501s
00:04:07.392 01:32:31 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # xtrace_disable
00:04:07.392 01:32:31 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:07.392 ************************************
00:04:07.392 END TEST hugepages
00:04:07.392 ************************************
00:04:07.392 01:32:31 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:04:07.392 01:32:31 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']'
00:04:07.392 01:32:31 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable
00:04:07.392 01:32:31 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:07.392 ************************************
00:04:07.392 START TEST driver
00:04:07.392 ************************************
00:04:07.392 01:32:31 setup.sh.driver -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:04:07.392 * Looking for test storage...
00:04:07.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:04:07.392 01:32:31 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
00:04:07.392 01:32:31 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:07.392 01:32:31 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:09.921 01:32:33 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:04:09.921 01:32:33 setup.sh.driver -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']'
00:04:09.921 01:32:33 setup.sh.driver -- common/autotest_common.sh@1104 -- # xtrace_disable
00:04:09.921 01:32:33 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:04:09.921 ************************************
00:04:09.921 START TEST guess_driver
00:04:09.921 ************************************
00:04:09.921 01:32:33 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # guess_driver
00:04:09.921 01:32:33 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker
00:04:09.921 01:32:33 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0
00:04:09.921 01:32:33 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver
00:04:09.921 01:32:33 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
00:04:09.921 01:32:33 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_groups
00:04:09.921 01:32:33 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio
00:04:09.921 01:32:33 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:04:09.921 01:32:33 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N
00:04:09.921 01:32:33 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:04:09.921 01:32:33 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 189 > 0 ))
00:04:09.921 01:32:33 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci
00:04:09.921 01:32:33 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci
00:04:09.921 01:32:33 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci
00:04:09.921 01:32:33 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:04:09.921 01:32:33 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz
00:04:09.921 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:04:09.921 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:04:09.921 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:04:09.921 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:04:09.921 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz
00:04:09.921 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz
00:04:09.921 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
00:04:09.921 01:32:33 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0
00:04:09.921 01:32:33 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci
00:04:09.921 01:32:33 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci
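The mod/dep/is_driver chain traced above boils down to one question: does `modprobe --show-depends` resolve the module to at least one loadable .ko? A condensed sketch of that check, simplified from the setup/driver.sh helpers (the collapsed single function is my own restatement, not the script verbatim):

    # is_driver MODULE -- succeed if modprobe can resolve MODULE to kernel objects.
    is_driver() {
        local deps
        deps=$(modprobe --show-depends "$1" 2>/dev/null) || return 1
        [[ $deps == *.ko* ]]   # e.g. "insmod /lib/modules/.../vfio-pci.ko.xz"
    }

    is_driver vfio_pci && echo "vfio_pci available"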
00:04:09.921 01:32:33 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:04:09.921 01:32:33 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:04:09.921 Looking for driver=vfio-pci
00:04:09.921 01:32:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:09.921 01:32:33 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:04:09.921 01:32:33 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]]
00:04:09.921 01:32:33 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:11.294 01:32:34 setup.sh.driver.guess_driver -- setup/driver.sh@57-61 -- # [repetitive xtrace condensed: for each `setup output config` line the loop checks `[[ -> == \-\> ]]` and `[[ vfio-pci == vfio-pci ]]`, then reads the next marker/driver pair; the passes repeat for every reported device]
00:04:12.231 01:32:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:12.231 01:32:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:04:12.231 01:32:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:12.231 01:32:36 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 ))
00:04:12.231 01:32:36 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset
00:04:12.231 01:32:36 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:12.231 01:32:36 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:14.763 real 0m4.892s
00:04:14.763 user 0m1.139s
00:04:14.763 sys 0m1.870s
00:04:14.763 01:32:38 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # xtrace_disable
00:04:14.763 01:32:38 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x
00:04:14.763 ************************************
00:04:14.763 END TEST guess_driver
00:04:14.763 ************************************
00:04:14.763 real 0m7.455s
00:04:14.763 user 0m1.747s
00:04:14.763 sys 0m2.976s
00:04:14.763 01:32:38 setup.sh.driver -- common/autotest_common.sh@1123 -- # xtrace_disable
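Condensed, the selection logic the guess_driver trace walks through is: use vfio-pci whenever the kernel exposes populated IOMMU groups (189 on this node) or unsafe no-IOMMU mode is enabled, otherwise fall back to a uio driver. A sketch of that decision, assuming the is_driver helper from above is in scope (a simplification of setup/driver.sh's pick_driver, not the script verbatim):

    pick_driver() {
        local unsafe_vfio=N
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        local iommu_groups=(/sys/kernel/iommu_groups/*)
        # With any IOMMU group present (or unsafe mode on), vfio-pci is viable.
        if ((${#iommu_groups[@]} > 0)) || [[ $unsafe_vfio == [yY] ]]; then
            is_driver vfio_pci && { echo vfio-pci; return 0; }
        fi
        is_driver uio_pci_generic && { echo uio_pci_generic; return 0; }
        echo 'No valid driver found'
        return 1
    }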
00:04:14.763 01:32:38 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:04:14.763 ************************************
00:04:14.763 END TEST driver
00:04:14.763 ************************************
00:04:14.763 01:32:38 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:04:14.763 01:32:38 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']'
00:04:14.763 01:32:38 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable
00:04:15.021 ************************************
00:04:15.021 START TEST devices
00:04:15.021 ************************************
00:04:15.021 01:32:38 setup.sh.devices -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:04:15.021 * Looking for test storage...
00:04:15.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:04:15.021 01:32:38 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT
00:04:15.021 01:32:38 setup.sh.devices -- setup/devices.sh@192 -- # setup reset
00:04:15.022 01:32:38 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:15.022 01:32:38 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:16.951 01:32:40 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs
00:04:16.951 01:32:40 setup.sh.devices -- common/autotest_common.sh@1666 -- # zoned_devs=()
00:04:16.951 01:32:40 setup.sh.devices -- common/autotest_common.sh@1666 -- # local -gA zoned_devs
00:04:16.951 01:32:40 setup.sh.devices -- common/autotest_common.sh@1667 -- # local nvme bdf
00:04:16.951 01:32:40 setup.sh.devices -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme*
00:04:16.951 01:32:40 setup.sh.devices -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n1
00:04:16.951 01:32:40 setup.sh.devices -- common/autotest_common.sh@1659 -- # local device=nvme0n1
00:04:16.951 01:32:40 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:16.951 01:32:40 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ none != none ]]
00:04:16.951 01:32:40 setup.sh.devices -- setup/devices.sh@196 -- # blocks=()
00:04:16.951 01:32:40 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks
00:04:16.951 01:32:40 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=()
00:04:16.951 01:32:40 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:04:16.951 01:32:40 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:04:16.951 01:32:40 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:04:16.951 01:32:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:04:16.951 01:32:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0
00:04:16.951 01:32:40 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:0b:00.0
00:04:16.951 01:32:40 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\b\:\0\0\.\0* ]]
00:04:16.951 01:32:40 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:04:16.951 01:32:40 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:04:16.951 01:32:40 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:04:16.951 No valid GPT data, bailing
00:04:16.951 01:32:40 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:04:16.951 01:32:40 setup.sh.devices -- scripts/common.sh@391 -- # pt=
00:04:16.951 01:32:40 setup.sh.devices -- scripts/common.sh@392 -- # return 1
00:04:16.951 01:32:40 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:04:16.951 01:32:40 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1
00:04:16.951 01:32:40 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:04:16.951 01:32:40 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016
00:04:16.951 01:32:40 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size ))
00:04:16.951 01:32:40 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:04:16.951 01:32:40 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:0b:00.0
00:04:16.951 01:32:40 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 ))
00:04:16.951 01:32:40 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
00:04:16.951 01:32:40 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:04:16.951 01:32:40 setup.sh.devices -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']'
00:04:16.951 01:32:40 setup.sh.devices -- common/autotest_common.sh@1104 -- # xtrace_disable
00:04:16.951 01:32:40 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:04:16.951 ************************************
00:04:16.951 START TEST nvme_mount
00:04:16.951 ************************************
00:04:16.951 01:32:40 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # nvme_mount
00:04:16.951 01:32:40 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1
00:04:16.951 01:32:40 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1
00:04:16.951 01:32:40 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:16.951 01:32:40 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:16.951 01:32:40 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1
00:04:16.951 01:32:40 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1
00:04:16.951 01:32:40 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1
00:04:16.951 01:32:40 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824
00:04:16.951 01:32:40 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:04:16.951 01:32:40 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=()
00:04:16.951 01:32:40 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts
00:04:16.951 01:32:40 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 ))
00:04:16.951 01:32:40 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:16.951 01:32:40 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:04:16.951 01:32:40 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ ))
00:04:16.951 01:32:40 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:16.951 01:32:40 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 ))
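get_zoned_devs and the block scan above qualify a test disk in three steps: it must not be a zoned device, must not already carry a partition table, and must be at least min_disk_size bytes. A compact sketch of the same checks (hypothetical helper name, simplified from devices.sh and scripts/common.sh; the real scripts also consult spdk-gpt.py):

    min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472, as in devices.sh@198

    usable_test_disk() {
        local block=$1
        # Zoned devices are skipped; "none" marks a regular block device.
        [[ $(< "/sys/block/$block/queue/zoned") == none ]] || return 1
        # A detectable partition table means the disk is already in use.
        [[ -z $(blkid -s PTTYPE -o value "/dev/$block") ]] || return 1
        # /sys reports the size in 512-byte sectors; convert and compare.
        local bytes=$(( $(< "/sys/block/$block/size") * 512 ))
        (( bytes >= min_disk_size ))
    }

    usable_test_disk nvme0n1 && echo "nvme0n1 selected"   # 1000204886016 bytes here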
00:04:16.951 01:32:40 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:04:16.951 01:32:40 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1
00:04:17.888 Creating new GPT entries in memory.
00:04:17.888 GPT data structures destroyed! You may now partition the disk using fdisk or
00:04:17.888 other utilities.
00:04:17.889 01:32:41 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:04:17.889 01:32:41 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:17.889 01:32:41 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:17.889 01:32:41 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:17.889 01:32:41 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:04:18.826 Creating new GPT entries in memory.
00:04:18.826 The operation has completed successfully.
00:04:18.826 01:32:42 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ ))
00:04:18.826 01:32:42 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:18.826 01:32:42 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3900361
00:04:18.826 01:32:42 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:18.826 01:32:42 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=
00:04:18.826 01:32:42 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:18.826 01:32:42 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]]
00:04:18.826 01:32:42 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1
00:04:18.826 01:32:42 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:18.826 01:32:42 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:0b:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:18.826 01:32:42 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0
00:04:18.826 01:32:42 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1
00:04:18.826 01:32:42 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:18.826 01:32:42 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:18.826 01:32:42 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:04:18.826 01:32:42 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:18.826 01:32:42 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:04:18.826 01:32:42 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
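Stripped of the uevent synchronization and locking, the partition_drive/mkfs sequence just traced is a four-step flow: wipe the GPT, create one 1 GiB partition, make an ext4 filesystem, and mount it. A bare-bones equivalent (destructive; shown only to summarize the steps, with the device and mountpoint as placeholders):

    disk=/dev/nvme0n1 mnt=/tmp/nvme_mount
    size=$((1073741824 / 512))       # 1 GiB expressed in 512-byte sectors

    sgdisk "$disk" --zap-all                            # destroy any existing GPT/MBR
    sgdisk "$disk" --new=1:2048:$((2048 + size - 1))    # one partition: sectors 2048..2099199
    mkfs.ext4 -qF "${disk}p1"                           # quiet, force: scratch disk
    mkdir -p "$mnt"
    mount "${disk}p1" "$mnt"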
00:04:18.826 01:32:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:18.826 01:32:42 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0
00:04:18.826 01:32:42 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:04:18.826 01:32:42 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:04:18.826 01:32:42 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:20.198 01:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60-62 -- # [repetitive xtrace condensed: the status loop reads each config line and skips non-matching devices 0000:00:04.7 down through 0000:00:04.0]
00:04:20.198 01:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]]
00:04:20.198 01:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]]
00:04:20.198 01:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:04:20.198 01:32:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60-62 -- # [repetitive xtrace condensed: the loop then skips 0000:80:04.7 down through 0000:80:04.0]
00:04:20.199 01:32:44 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:20.199 01:32:44 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]]
00:04:20.199 01:32:44 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:20.199 01:32:44 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:20.199 01:32:44 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:20.199 01:32:44 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme
00:04:20.199 01:32:44 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:20.199 01:32:44 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:20.199 01:32:44 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:20.199 01:32:44 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:04:20.199 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:20.199 01:32:44 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:20.199 01:32:44 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:20.455 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:04:20.455 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:04:20.455 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:04:20.455 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:04:20.455 01:32:44 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M
00:04:20.455 01:32:44 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M
00:04:20.455 01:32:44
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.455 01:32:44 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:20.455 01:32:44 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:20.455 01:32:44 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.455 01:32:44 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:0b:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:20.455 01:32:44 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:04:20.455 01:32:44 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:20.455 01:32:44 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.455 01:32:44 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:20.455 01:32:44 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:20.455 01:32:44 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:20.455 01:32:44 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:20.455 01:32:44 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:20.455 01:32:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.455 01:32:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:04:20.455 01:32:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:20.455 01:32:44 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.455 01:32:44 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == 
\0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:0b:00.0 data@nvme0n1 '' '' 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:21.828 01:32:45 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:23.202 01:32:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:23.202 01:32:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.202 01:32:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:23.202 01:32:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.202 01:32:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:23.202 01:32:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.202 01:32:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:23.202 01:32:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.202 01:32:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:23.202 01:32:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.202 01:32:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:23.202 01:32:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.202 01:32:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:23.202 01:32:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.202 01:32:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:23.202 01:32:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.202 01:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 
00:04:23.202 01:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:23.202 01:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:23.202 01:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.202 01:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:23.202 01:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.202 01:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:23.202 01:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.202 01:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:23.202 01:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.202 01:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:23.202 01:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.202 01:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:23.202 01:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.202 01:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:23.202 01:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.202 01:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:23.202 01:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.202 01:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:23.202 01:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.461 01:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:23.461 01:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:23.461 01:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:23.461 01:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:23.461 01:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:23.461 01:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:23.461 01:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:23.461 01:32:47 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:23.461 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:23.461 00:04:23.461 real 0m6.701s 00:04:23.461 user 0m1.668s 00:04:23.461 sys 0m2.651s 00:04:23.461 01:32:47 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:23.461 01:32:47 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:23.461 ************************************ 00:04:23.461 END TEST nvme_mount 00:04:23.461 ************************************ 00:04:23.461 
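The block of [[ ... ]] comparisons above is the verify loop from devices.sh@59-66: every line of the setup status table is read, only the allowed BDF is inspected, and found flips to 1 when its status field advertises the expected active mount. A condensed sketch (the process-substitution plumbing is an assumption; the field names and globs are from the trace):

found=0
while read -r pci _ _ status; do
  [[ $pci == "$PCI_ALLOWED" ]] || continue          # skip the I/OAT channels
  [[ $status == *"Active devices: "*"$mounts"* ]] && found=1
done < <("$rootdir/scripts/setup.sh" config)
(( found == 1 ))                                    # devices.sh@66 gate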
01:32:47 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:23.461 01:32:47 setup.sh.devices -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:23.461 01:32:47 setup.sh.devices -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:23.461 01:32:47 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:23.461 ************************************ 00:04:23.461 START TEST dm_mount 00:04:23.461 ************************************ 00:04:23.461 01:32:47 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # dm_mount 00:04:23.461 01:32:47 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:23.461 01:32:47 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:23.461 01:32:47 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:23.461 01:32:47 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:23.461 01:32:47 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:23.461 01:32:47 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:23.461 01:32:47 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:23.461 01:32:47 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:23.461 01:32:47 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:23.461 01:32:47 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:23.461 01:32:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:23.461 01:32:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:23.461 01:32:47 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:23.461 01:32:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:23.461 01:32:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:23.461 01:32:47 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:23.461 01:32:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:23.461 01:32:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:23.461 01:32:47 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:23.461 01:32:47 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:23.461 01:32:47 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:24.398 Creating new GPT entries in memory. 00:04:24.398 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:24.398 other utilities. 00:04:24.398 01:32:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:24.398 01:32:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:24.398 01:32:48 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:24.398 01:32:48 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:24.398 01:32:48 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:25.800 Creating new GPT entries in memory. 00:04:25.800 The operation has completed successfully. 
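The common.sh@57-60 loop above lays partitions out back to back; with size = 1073741824 / 512 = 2097152 sectors (common.sh@51) it produces exactly the two sgdisk calls that follow in the trace:

size=$(( 1073741824 / 512 ))     # bytes -> 512-byte sectors = 2097152
part_start=0 part_end=0
for part in 1 2; do
  (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
  (( part_end = part_start + size - 1 ))
  flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new="$part:$part_start:$part_end"
done
# -> --new=1:2048:2099199 and --new=2:2099200:4196351, matching the log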
00:04:25.800 01:32:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:25.800 01:32:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:25.800 01:32:49 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:25.800 01:32:49 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:25.800 01:32:49 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:26.735 The operation has completed successfully. 00:04:26.735 01:32:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:26.735 01:32:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:26.735 01:32:50 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3903030 00:04:26.735 01:32:50 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:26.735 01:32:50 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:26.735 01:32:50 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:26.735 01:32:50 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:26.735 01:32:50 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:26.735 01:32:50 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:26.735 01:32:50 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:26.735 01:32:50 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:26.735 01:32:50 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:26.735 01:32:50 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:26.735 01:32:50 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:26.735 01:32:50 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:26.735 01:32:50 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:26.735 01:32:50 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:26.735 01:32:50 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:26.735 01:32:50 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:26.735 01:32:50 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:26.735 01:32:50 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:26.735 01:32:50 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:26.735 01:32:50 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:0b:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:26.735 01:32:50 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:04:26.735 01:32:50 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:26.735 01:32:50 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:26.735 01:32:50 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:26.735 01:32:50 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:26.735 01:32:50 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:26.735 01:32:50 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:26.735 01:32:50 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:26.735 01:32:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.735 01:32:50 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:04:26.735 01:32:50 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:26.735 01:32:50 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.735 01:32:50 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:27.670 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:27.670 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.670 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:27.670 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.670 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:27.670 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.670 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:27.670 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.670 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:27.670 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.670 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:27.670 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.670 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:27.670 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.670 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:27.670 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.929 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:27.929 01:32:51 setup.sh.devices.dm_mount -- 
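The trace shows dmsetup create nvme_dm_test without its mapping table. Since both nvme0n1p1 and nvme0n1p2 later appear as holders of dm-0, a concatenation of the two 2097152-sector partitions reproduces the observed layout; the table below is an assumption to that effect, not a line from the log:

dmsetup create nvme_dm_test <<'EOF'   # hypothetical table: p1 then p2, back to back
0 2097152 linear /dev/nvme0n1p1 0
2097152 2097152 linear /dev/nvme0n1p2 0
EOF
dm=$(readlink -f /dev/mapper/nvme_dm_test)        # -> /dev/dm-0 (devices.sh@165)
test -e /sys/class/block/nvme0n1p1/holders/dm-0   # devices.sh@168
test -e /sys/class/block/nvme0n1p2/holders/dm-0   # devices.sh@169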
setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:27.929 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:27.929 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.929 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:27.929 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.929 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:27.929 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.929 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:27.929 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.929 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:27.929 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.929 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:27.929 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.929 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:27.929 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.929 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:27.929 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.929 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:27.929 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.929 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:27.929 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:27.929 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:27.929 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:27.929 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:27.929 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:27.929 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:0b:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:27.930 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:04:27.930 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:27.930 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:27.930 
01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:27.930 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:27.930 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:27.930 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:27.930 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.930 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:04:27.930 01:32:51 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:27.930 01:32:51 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.930 01:32:51 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:29.304 01:32:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:29.304 01:32:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.304 01:32:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:29.304 01:32:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.304 01:32:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:29.304 01:32:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.304 01:32:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:29.304 01:32:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.304 01:32:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:29.304 01:32:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.304 01:32:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:29.304 01:32:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.304 01:32:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:29.304 01:32:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.304 01:32:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:29.304 01:32:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.304 01:32:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:29.304 01:32:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:29.304 01:32:53 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:29.304 01:32:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.304 01:32:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:29.304 01:32:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.304 01:32:53 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:29.304 01:32:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.304 01:32:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:29.304 01:32:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.304 01:32:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:29.304 01:32:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.304 01:32:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:29.304 01:32:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.304 01:32:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:29.304 01:32:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.304 01:32:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:29.304 01:32:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.304 01:32:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:29.304 01:32:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.563 01:32:53 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:29.563 01:32:53 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:29.563 01:32:53 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:29.563 01:32:53 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:29.563 01:32:53 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:29.563 01:32:53 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:29.563 01:32:53 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:29.563 01:32:53 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:29.563 01:32:53 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:29.563 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:29.563 01:32:53 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:29.563 01:32:53 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:29.563 00:04:29.563 real 0m6.040s 00:04:29.563 user 0m1.111s 00:04:29.563 sys 0m1.841s 00:04:29.563 01:32:53 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:29.563 01:32:53 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:29.563 ************************************ 00:04:29.563 END TEST dm_mount 00:04:29.563 ************************************ 00:04:29.563 01:32:53 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:29.563 01:32:53 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:29.563 01:32:53 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.563 01:32:53 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 
00:04:29.563 01:32:53 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:29.563 01:32:53 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:29.563 01:32:53 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:29.822 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:29.822 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:29.822 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:29.822 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:29.822 01:32:53 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:29.822 01:32:53 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:29.822 01:32:53 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:29.822 01:32:53 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:29.822 01:32:53 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:29.822 01:32:53 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:29.822 01:32:53 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:29.822 00:04:29.822 real 0m14.909s 00:04:29.822 user 0m3.506s 00:04:29.822 sys 0m5.700s 00:04:29.822 01:32:53 setup.sh.devices -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:29.822 01:32:53 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:29.822 ************************************ 00:04:29.822 END TEST devices 00:04:29.822 ************************************ 00:04:29.822 00:04:29.822 real 0m46.133s 00:04:29.822 user 0m13.536s 00:04:29.822 sys 0m21.204s 00:04:29.822 01:32:53 setup.sh -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:29.822 01:32:53 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:29.822 ************************************ 00:04:29.822 END TEST setup.sh 00:04:29.822 ************************************ 00:04:29.822 01:32:53 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:31.196 Hugepages 00:04:31.196 node hugesize free / total 00:04:31.196 node0 1048576kB 0 / 0 00:04:31.196 node0 2048kB 2048 / 2048 00:04:31.196 node1 1048576kB 0 / 0 00:04:31.196 node1 2048kB 0 / 0 00:04:31.196 00:04:31.196 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:31.196 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:04:31.196 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:04:31.196 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:04:31.196 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:04:31.196 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:04:31.196 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:04:31.196 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:04:31.196 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:04:31.196 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:31.454 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:04:31.454 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:04:31.454 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:04:31.454 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:04:31.454 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:04:31.454 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:04:31.454 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:04:31.454 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:04:31.454 01:32:55 -- spdk/autotest.sh@130 -- # uname -s 
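The Hugepages table printed above comes straight from standard kernel sysfs files; an equivalent manual query (plain sysfs reads, no SPDK involvement) is:

for node in /sys/devices/system/node/node*; do
  for hp in "$node"/hugepages/hugepages-*; do
    # free / total, matching rows like "node0 2048kB 2048 / 2048" above
    echo "$(basename "$node") ${hp##*-}: $(cat "$hp"/free_hugepages) / $(cat "$hp"/nr_hugepages)"
  done
done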
00:04:31.454 01:32:55 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:31.454 01:32:55 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:31.454 01:32:55 -- common/autotest_common.sh@1528 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:32.829 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:32.829 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:32.829 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:32.829 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:32.829 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:32.829 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:32.829 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:32.829 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:32.829 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:32.829 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:32.829 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:32.830 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:32.830 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:32.830 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:32.830 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:32.830 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:33.767 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:04:33.767 01:32:57 -- common/autotest_common.sh@1529 -- # sleep 1 00:04:34.703 01:32:58 -- common/autotest_common.sh@1530 -- # bdfs=() 00:04:34.703 01:32:58 -- common/autotest_common.sh@1530 -- # local bdfs 00:04:34.703 01:32:58 -- common/autotest_common.sh@1531 -- # bdfs=($(get_nvme_bdfs)) 00:04:34.961 01:32:58 -- common/autotest_common.sh@1531 -- # get_nvme_bdfs 00:04:34.961 01:32:58 -- common/autotest_common.sh@1510 -- # bdfs=() 00:04:34.961 01:32:58 -- common/autotest_common.sh@1510 -- # local bdfs 00:04:34.961 01:32:58 -- common/autotest_common.sh@1511 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:34.961 01:32:58 -- common/autotest_common.sh@1511 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:34.961 01:32:58 -- common/autotest_common.sh@1511 -- # jq -r '.config[].params.traddr' 00:04:34.961 01:32:58 -- common/autotest_common.sh@1512 -- # (( 1 == 0 )) 00:04:34.961 01:32:58 -- common/autotest_common.sh@1516 -- # printf '%s\n' 0000:0b:00.0 00:04:34.961 01:32:58 -- common/autotest_common.sh@1533 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:36.333 Waiting for block devices as requested 00:04:36.333 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:36.333 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:36.333 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:36.333 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:36.591 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:36.591 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:36.591 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:36.591 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:36.850 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:04:36.850 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:36.850 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:36.850 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:37.107 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:37.107 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:37.107 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:37.107 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:37.365 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:37.365 01:33:01 -- common/autotest_common.sh@1535 -- # 
for bdf in "${bdfs[@]}" 00:04:37.365 01:33:01 -- common/autotest_common.sh@1536 -- # get_nvme_ctrlr_from_bdf 0000:0b:00.0 00:04:37.365 01:33:01 -- common/autotest_common.sh@1499 -- # readlink -f /sys/class/nvme/nvme0 00:04:37.365 01:33:01 -- common/autotest_common.sh@1499 -- # grep 0000:0b:00.0/nvme/nvme 00:04:37.365 01:33:01 -- common/autotest_common.sh@1499 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:04:37.365 01:33:01 -- common/autotest_common.sh@1500 -- # [[ -z /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 ]] 00:04:37.365 01:33:01 -- common/autotest_common.sh@1504 -- # basename /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:04:37.365 01:33:01 -- common/autotest_common.sh@1504 -- # printf '%s\n' nvme0 00:04:37.365 01:33:01 -- common/autotest_common.sh@1536 -- # nvme_ctrlr=/dev/nvme0 00:04:37.365 01:33:01 -- common/autotest_common.sh@1537 -- # [[ -z /dev/nvme0 ]] 00:04:37.365 01:33:01 -- common/autotest_common.sh@1542 -- # nvme id-ctrl /dev/nvme0 00:04:37.365 01:33:01 -- common/autotest_common.sh@1542 -- # grep oacs 00:04:37.365 01:33:01 -- common/autotest_common.sh@1542 -- # cut -d: -f2 00:04:37.365 01:33:01 -- common/autotest_common.sh@1542 -- # oacs=' 0xf' 00:04:37.365 01:33:01 -- common/autotest_common.sh@1543 -- # oacs_ns_manage=8 00:04:37.365 01:33:01 -- common/autotest_common.sh@1545 -- # [[ 8 -ne 0 ]] 00:04:37.365 01:33:01 -- common/autotest_common.sh@1551 -- # nvme id-ctrl /dev/nvme0 00:04:37.365 01:33:01 -- common/autotest_common.sh@1551 -- # grep unvmcap 00:04:37.365 01:33:01 -- common/autotest_common.sh@1551 -- # cut -d: -f2 00:04:37.365 01:33:01 -- common/autotest_common.sh@1551 -- # unvmcap=' 0' 00:04:37.365 01:33:01 -- common/autotest_common.sh@1552 -- # [[ 0 -eq 0 ]] 00:04:37.365 01:33:01 -- common/autotest_common.sh@1554 -- # continue 00:04:37.365 01:33:01 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:37.365 01:33:01 -- common/autotest_common.sh@727 -- # xtrace_disable 00:04:37.365 01:33:01 -- common/autotest_common.sh@10 -- # set +x 00:04:37.365 01:33:01 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:37.365 01:33:01 -- common/autotest_common.sh@721 -- # xtrace_disable 00:04:37.365 01:33:01 -- common/autotest_common.sh@10 -- # set +x 00:04:37.365 01:33:01 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:38.741 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:38.741 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:38.741 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:38.741 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:38.741 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:38.741 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:38.741 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:38.741 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:38.741 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:38.741 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:38.741 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:38.741 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:38.741 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:38.741 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:38.741 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:38.741 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:39.676 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:04:39.933 01:33:03 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:39.933 01:33:03 -- common/autotest_common.sh@727 -- # xtrace_disable 
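autotest_common.sh@1542-1545 above decides whether the controller supports namespace management by masking bit 3 of the OACS word from nvme id-ctrl, then checks that no unallocated capacity remains. A standalone sketch with the same commands and device name as the trace:

oacs=$(nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2)   # ' 0xf' in this run
oacs_ns_manage=$(( oacs & 0x8 ))      # bit 3: Namespace Management/Attachment
if (( oacs_ns_manage != 0 )); then
  unvmcap=$(nvme id-ctrl /dev/nvme0 | grep unvmcap | cut -d: -f2)
  (( unvmcap == 0 ))                  # 0 -> namespace already spans full capacity
fi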
00:04:39.933 01:33:03 -- common/autotest_common.sh@10 -- # set +x 00:04:39.933 01:33:03 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:39.933 01:33:03 -- common/autotest_common.sh@1588 -- # mapfile -t bdfs 00:04:39.933 01:33:03 -- common/autotest_common.sh@1588 -- # get_nvme_bdfs_by_id 0x0a54 00:04:39.933 01:33:03 -- common/autotest_common.sh@1574 -- # bdfs=() 00:04:39.933 01:33:03 -- common/autotest_common.sh@1574 -- # local bdfs 00:04:39.933 01:33:03 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs 00:04:39.933 01:33:03 -- common/autotest_common.sh@1510 -- # bdfs=() 00:04:39.933 01:33:03 -- common/autotest_common.sh@1510 -- # local bdfs 00:04:39.933 01:33:03 -- common/autotest_common.sh@1511 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:39.933 01:33:03 -- common/autotest_common.sh@1511 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:39.933 01:33:03 -- common/autotest_common.sh@1511 -- # jq -r '.config[].params.traddr' 00:04:39.933 01:33:03 -- common/autotest_common.sh@1512 -- # (( 1 == 0 )) 00:04:39.933 01:33:03 -- common/autotest_common.sh@1516 -- # printf '%s\n' 0000:0b:00.0 00:04:39.933 01:33:03 -- common/autotest_common.sh@1576 -- # for bdf in $(get_nvme_bdfs) 00:04:39.933 01:33:03 -- common/autotest_common.sh@1577 -- # cat /sys/bus/pci/devices/0000:0b:00.0/device 00:04:39.933 01:33:03 -- common/autotest_common.sh@1577 -- # device=0x0a54 00:04:39.933 01:33:03 -- common/autotest_common.sh@1578 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:39.933 01:33:03 -- common/autotest_common.sh@1579 -- # bdfs+=($bdf) 00:04:39.933 01:33:03 -- common/autotest_common.sh@1583 -- # printf '%s\n' 0000:0b:00.0 00:04:39.933 01:33:03 -- common/autotest_common.sh@1589 -- # [[ -z 0000:0b:00.0 ]] 00:04:39.933 01:33:03 -- common/autotest_common.sh@1594 -- # spdk_tgt_pid=3909147 00:04:39.933 01:33:03 -- common/autotest_common.sh@1593 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:39.933 01:33:03 -- common/autotest_common.sh@1595 -- # waitforlisten 3909147 00:04:39.933 01:33:03 -- common/autotest_common.sh@828 -- # '[' -z 3909147 ']' 00:04:39.933 01:33:03 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.933 01:33:03 -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:39.933 01:33:03 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.933 01:33:03 -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:39.933 01:33:03 -- common/autotest_common.sh@10 -- # set +x 00:04:39.933 [2024-05-15 01:33:03.814569] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
00:04:39.933 [2024-05-15 01:33:03.814664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3909147 ] 00:04:39.933 EAL: No free 2048 kB hugepages reported on node 1 00:04:40.191 [2024-05-15 01:33:03.887255] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.191 [2024-05-15 01:33:03.975008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.449 01:33:04 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:40.449 01:33:04 -- common/autotest_common.sh@861 -- # return 0 00:04:40.449 01:33:04 -- common/autotest_common.sh@1597 -- # bdf_id=0 00:04:40.449 01:33:04 -- common/autotest_common.sh@1598 -- # for bdf in "${bdfs[@]}" 00:04:40.449 01:33:04 -- common/autotest_common.sh@1599 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:0b:00.0 00:04:43.731 nvme0n1 00:04:43.731 01:33:07 -- common/autotest_common.sh@1601 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:43.731 [2024-05-15 01:33:07.561481] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:43.731 [2024-05-15 01:33:07.561527] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:43.731 request: 00:04:43.731 { 00:04:43.731 "nvme_ctrlr_name": "nvme0", 00:04:43.731 "password": "test", 00:04:43.731 "method": "bdev_nvme_opal_revert", 00:04:43.731 "req_id": 1 00:04:43.731 } 00:04:43.731 Got JSON-RPC error response 00:04:43.731 response: 00:04:43.731 { 00:04:43.731 "code": -32603, 00:04:43.731 "message": "Internal error" 00:04:43.731 } 00:04:43.731 01:33:07 -- common/autotest_common.sh@1601 -- # true 00:04:43.731 01:33:07 -- common/autotest_common.sh@1602 -- # (( ++bdf_id )) 00:04:43.731 01:33:07 -- common/autotest_common.sh@1605 -- # killprocess 3909147 00:04:43.731 01:33:07 -- common/autotest_common.sh@947 -- # '[' -z 3909147 ']' 00:04:43.731 01:33:07 -- common/autotest_common.sh@951 -- # kill -0 3909147 00:04:43.731 01:33:07 -- common/autotest_common.sh@952 -- # uname 00:04:43.731 01:33:07 -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:43.731 01:33:07 -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3909147 00:04:43.731 01:33:07 -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:43.731 01:33:07 -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:43.731 01:33:07 -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3909147' 00:04:43.731 killing process with pid 3909147 00:04:43.731 01:33:07 -- common/autotest_common.sh@966 -- # kill 3909147 00:04:43.731 01:33:07 -- common/autotest_common.sh@971 -- # wait 3909147 00:04:43.989 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:43.989 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:43.989 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:43.989 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:43.989 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:43.989 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:43.989 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:43.989 EAL: Unexpected size 0 of DMA remapping cleared 
00:04:45.397 01:33:09 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:45.397 01:33:09 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:45.397 01:33:09 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:45.397 01:33:09 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:45.397 01:33:09 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:45.397 01:33:09 -- common/autotest_common.sh@721 -- # xtrace_disable 00:04:45.397 01:33:09 -- common/autotest_common.sh@10 -- # set +x 00:04:45.397 01:33:09 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:45.397 01:33:09 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:45.397 01:33:09 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:45.397 01:33:09 -- common/autotest_common.sh@10 -- # set +x 00:04:45.397 ************************************ 00:04:45.397 START TEST env 00:04:45.397 ************************************ 00:04:45.397 01:33:09 env -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:45.654 * Looking for test storage... 
00:04:45.654 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:45.654 01:33:09 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:45.654 01:33:09 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:45.654 01:33:09 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:45.654 01:33:09 env -- common/autotest_common.sh@10 -- # set +x 00:04:45.654 ************************************ 00:04:45.654 START TEST env_memory 00:04:45.654 ************************************ 00:04:45.654 01:33:09 env.env_memory -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:45.654 00:04:45.654 00:04:45.654 CUnit - A unit testing framework for C - Version 2.1-3 00:04:45.654 http://cunit.sourceforge.net/ 00:04:45.654 00:04:45.654 00:04:45.654 Suite: memory 00:04:45.654 Test: alloc and free memory map ...[2024-05-15 01:33:09.432657] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:45.655 passed 00:04:45.655 Test: mem map translation ...[2024-05-15 01:33:09.453419] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:45.655 [2024-05-15 01:33:09.453442] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:45.655 [2024-05-15 01:33:09.453485] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:45.655 [2024-05-15 01:33:09.453497] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:45.655 passed 00:04:45.655 Test: mem map registration ...[2024-05-15 01:33:09.495689] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:45.655 [2024-05-15 01:33:09.495712] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:45.655 passed 00:04:45.655 Test: mem map adjacent registrations ...passed 00:04:45.655 00:04:45.655 Run Summary: Type Total Ran Passed Failed Inactive 00:04:45.655 suites 1 1 n/a 0 0 00:04:45.655 tests 4 4 4 0 0 00:04:45.655 asserts 152 152 152 0 n/a 00:04:45.655 00:04:45.655 Elapsed time = 0.145 seconds 00:04:45.655 00:04:45.655 real 0m0.152s 00:04:45.655 user 0m0.146s 00:04:45.655 sys 0m0.006s 00:04:45.655 01:33:09 env.env_memory -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:45.655 01:33:09 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:45.655 ************************************ 00:04:45.655 END TEST env_memory 00:04:45.655 ************************************ 00:04:45.655 01:33:09 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:45.655 01:33:09 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:45.655 01:33:09 env -- common/autotest_common.sh@1104 -- # xtrace_disable 
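The *ERROR* lines in the translation test above are deliberate: spdk_mem_map translations operate at 2 MiB (hugepage) granularity, so a vaddr or len that is not a 2 MiB multiple is rejected, and user virtual addresses at or beyond 256 TiB (1 << 48, i.e. the logged 281474976710656) are out of range. A stand-alone sketch of those checks, written for this note rather than lifted from memory.c:

# The constants mirror the 2 MiB translation unit and the 48-bit user VA
# ceiling that the errors above demonstrate; the checker itself is
# hypothetical, not SPDK code.
VALUE_2MB = 2 * 1024 * 1024        # translation granularity (hugepage size)
USER_VA_LIMIT = 1 << 48            # 281474976710656: first rejected address

def set_translation_params_valid(vaddr: int, length: int) -> bool:
    if vaddr + length > USER_VA_LIMIT:           # "invalid usermode virtual address"
        return False
    if vaddr % VALUE_2MB or length % VALUE_2MB:  # "invalid ... parameters"
        return False
    return length > 0

assert not set_translation_params_valid(2097152, 1234)    # len unaligned
assert not set_translation_params_valid(1234, 2097152)    # vaddr unaligned
assert not set_translation_params_valid(281474976710656, 2097152)
assert set_translation_params_valid(0x200000, 0x200000)   # 2 MiB / 2 MiB: OK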
00:04:45.655 01:33:09 env -- common/autotest_common.sh@10 -- # set +x 00:04:45.922 ************************************ 00:04:45.922 START TEST env_vtophys 00:04:45.922 ************************************ 00:04:45.922 01:33:09 env.env_vtophys -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:45.922 EAL: lib.eal log level changed from notice to debug 00:04:45.922 EAL: Detected lcore 0 as core 0 on socket 0 00:04:45.922 EAL: Detected lcore 1 as core 1 on socket 0 00:04:45.922 EAL: Detected lcore 2 as core 2 on socket 0 00:04:45.922 EAL: Detected lcore 3 as core 3 on socket 0 00:04:45.922 EAL: Detected lcore 4 as core 4 on socket 0 00:04:45.922 EAL: Detected lcore 5 as core 5 on socket 0 00:04:45.922 EAL: Detected lcore 6 as core 8 on socket 0 00:04:45.922 EAL: Detected lcore 7 as core 9 on socket 0 00:04:45.922 EAL: Detected lcore 8 as core 10 on socket 0 00:04:45.922 EAL: Detected lcore 9 as core 11 on socket 0 00:04:45.922 EAL: Detected lcore 10 as core 12 on socket 0 00:04:45.922 EAL: Detected lcore 11 as core 13 on socket 0 00:04:45.922 EAL: Detected lcore 12 as core 0 on socket 1 00:04:45.922 EAL: Detected lcore 13 as core 1 on socket 1 00:04:45.922 EAL: Detected lcore 14 as core 2 on socket 1 00:04:45.922 EAL: Detected lcore 15 as core 3 on socket 1 00:04:45.922 EAL: Detected lcore 16 as core 4 on socket 1 00:04:45.922 EAL: Detected lcore 17 as core 5 on socket 1 00:04:45.922 EAL: Detected lcore 18 as core 8 on socket 1 00:04:45.922 EAL: Detected lcore 19 as core 9 on socket 1 00:04:45.922 EAL: Detected lcore 20 as core 10 on socket 1 00:04:45.922 EAL: Detected lcore 21 as core 11 on socket 1 00:04:45.922 EAL: Detected lcore 22 as core 12 on socket 1 00:04:45.922 EAL: Detected lcore 23 as core 13 on socket 1 00:04:45.922 EAL: Detected lcore 24 as core 0 on socket 0 00:04:45.922 EAL: Detected lcore 25 as core 1 on socket 0 00:04:45.922 EAL: Detected lcore 26 as core 2 on socket 0 00:04:45.922 EAL: Detected lcore 27 as core 3 on socket 0 00:04:45.922 EAL: Detected lcore 28 as core 4 on socket 0 00:04:45.922 EAL: Detected lcore 29 as core 5 on socket 0 00:04:45.922 EAL: Detected lcore 30 as core 8 on socket 0 00:04:45.922 EAL: Detected lcore 31 as core 9 on socket 0 00:04:45.922 EAL: Detected lcore 32 as core 10 on socket 0 00:04:45.922 EAL: Detected lcore 33 as core 11 on socket 0 00:04:45.922 EAL: Detected lcore 34 as core 12 on socket 0 00:04:45.922 EAL: Detected lcore 35 as core 13 on socket 0 00:04:45.922 EAL: Detected lcore 36 as core 0 on socket 1 00:04:45.922 EAL: Detected lcore 37 as core 1 on socket 1 00:04:45.922 EAL: Detected lcore 38 as core 2 on socket 1 00:04:45.922 EAL: Detected lcore 39 as core 3 on socket 1 00:04:45.922 EAL: Detected lcore 40 as core 4 on socket 1 00:04:45.922 EAL: Detected lcore 41 as core 5 on socket 1 00:04:45.922 EAL: Detected lcore 42 as core 8 on socket 1 00:04:45.922 EAL: Detected lcore 43 as core 9 on socket 1 00:04:45.922 EAL: Detected lcore 44 as core 10 on socket 1 00:04:45.922 EAL: Detected lcore 45 as core 11 on socket 1 00:04:45.922 EAL: Detected lcore 46 as core 12 on socket 1 00:04:45.922 EAL: Detected lcore 47 as core 13 on socket 1 00:04:45.922 EAL: Maximum logical cores by configuration: 128 00:04:45.922 EAL: Detected CPU lcores: 48 00:04:45.922 EAL: Detected NUMA nodes: 2 00:04:45.922 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:04:45.922 EAL: Detected shared linkage of DPDK 00:04:45.922 EAL: open shared lib 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:04:45.922 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:04:45.922 EAL: Registered [vdev] bus. 00:04:45.922 EAL: bus.vdev log level changed from disabled to notice 00:04:45.922 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:04:45.922 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:04:45.922 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:45.922 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:45.922 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:04:45.922 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:04:45.922 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:04:45.922 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:04:45.922 EAL: No shared files mode enabled, IPC will be disabled 00:04:45.922 EAL: No shared files mode enabled, IPC is disabled 00:04:45.922 EAL: Bus pci wants IOVA as 'DC' 00:04:45.922 EAL: Bus vdev wants IOVA as 'DC' 00:04:45.922 EAL: Buses did not request a specific IOVA mode. 00:04:45.922 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:45.922 EAL: Selected IOVA mode 'VA' 00:04:45.922 EAL: No free 2048 kB hugepages reported on node 1 00:04:45.923 EAL: Probing VFIO support... 00:04:45.923 EAL: IOMMU type 1 (Type 1) is supported 00:04:45.923 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:45.923 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:45.923 EAL: VFIO support initialized 00:04:45.923 EAL: Ask a virtual area of 0x2e000 bytes 00:04:45.923 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:45.923 EAL: Setting up physically contiguous memory... 
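EAL settles on IOVA-as-VA here because the kernel exposes a working IOMMU through VFIO (type 1). Whether that precondition holds on a given host is visible outside DPDK as well; a quick check, assuming the standard Linux sysfs layout:

# A populated /sys/kernel/iommu_groups means the IOMMU is enabled, VFIO
# can bind devices, and EAL may select IOVA mode 'VA' as it did above.
import os

try:
    groups = os.listdir("/sys/kernel/iommu_groups")
except FileNotFoundError:
    groups = []
print(f"{len(groups)} IOMMU groups" if groups else "IOMMU off or not exposed")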
00:04:45.923 EAL: Setting maximum number of open files to 524288 00:04:45.923 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:45.923 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:45.923 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:45.923 EAL: Ask a virtual area of 0x61000 bytes 00:04:45.923 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:45.923 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:45.923 EAL: Ask a virtual area of 0x400000000 bytes 00:04:45.923 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:45.923 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:45.923 EAL: Ask a virtual area of 0x61000 bytes 00:04:45.923 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:45.923 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:45.923 EAL: Ask a virtual area of 0x400000000 bytes 00:04:45.923 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:45.923 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:45.923 EAL: Ask a virtual area of 0x61000 bytes 00:04:45.923 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:45.923 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:45.923 EAL: Ask a virtual area of 0x400000000 bytes 00:04:45.923 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:45.923 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:45.923 EAL: Ask a virtual area of 0x61000 bytes 00:04:45.923 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:45.923 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:45.923 EAL: Ask a virtual area of 0x400000000 bytes 00:04:45.923 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:45.923 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:45.923 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:45.923 EAL: Ask a virtual area of 0x61000 bytes 00:04:45.923 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:45.923 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:45.923 EAL: Ask a virtual area of 0x400000000 bytes 00:04:45.923 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:45.923 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:45.923 EAL: Ask a virtual area of 0x61000 bytes 00:04:45.923 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:45.923 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:45.923 EAL: Ask a virtual area of 0x400000000 bytes 00:04:45.923 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:45.923 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:45.923 EAL: Ask a virtual area of 0x61000 bytes 00:04:45.923 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:45.923 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:45.923 EAL: Ask a virtual area of 0x400000000 bytes 00:04:45.923 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:45.923 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:45.923 EAL: Ask a virtual area of 0x61000 bytes 00:04:45.923 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:45.923 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:45.923 EAL: Ask a virtual area of 0x400000000 bytes 00:04:45.923 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:45.923 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:45.923 EAL: Hugepages will be freed exactly as allocated. 00:04:45.923 EAL: No shared files mode enabled, IPC is disabled 00:04:45.923 EAL: No shared files mode enabled, IPC is disabled 00:04:45.923 EAL: TSC frequency is ~2700000 KHz 00:04:45.923 EAL: Main lcore 0 is ready (tid=7f869e314a00;cpuset=[0]) 00:04:45.923 EAL: Trying to obtain current memory policy. 00:04:45.923 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.923 EAL: Restoring previous memory policy: 0 00:04:45.923 EAL: request: mp_malloc_sync 00:04:45.923 EAL: No shared files mode enabled, IPC is disabled 00:04:45.923 EAL: Heap on socket 0 was expanded by 2MB 00:04:45.923 EAL: No shared files mode enabled, IPC is disabled 00:04:45.923 EAL: No shared files mode enabled, IPC is disabled 00:04:45.923 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:45.923 EAL: Mem event callback 'spdk:(nil)' registered 00:04:45.923 00:04:45.923 00:04:45.923 CUnit - A unit testing framework for C - Version 2.1-3 00:04:45.923 http://cunit.sourceforge.net/ 00:04:45.923 00:04:45.923 00:04:45.923 Suite: components_suite 00:04:45.923 Test: vtophys_malloc_test ...passed 00:04:45.923 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:45.923 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.923 EAL: Restoring previous memory policy: 4 00:04:45.923 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.923 EAL: request: mp_malloc_sync 00:04:45.923 EAL: No shared files mode enabled, IPC is disabled 00:04:45.923 EAL: Heap on socket 0 was expanded by 4MB 00:04:45.923 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.923 EAL: request: mp_malloc_sync 00:04:45.923 EAL: No shared files mode enabled, IPC is disabled 00:04:45.923 EAL: Heap on socket 0 was shrunk by 4MB 00:04:45.923 EAL: Trying to obtain current memory policy. 00:04:45.923 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.923 EAL: Restoring previous memory policy: 4 00:04:45.923 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.923 EAL: request: mp_malloc_sync 00:04:45.923 EAL: No shared files mode enabled, IPC is disabled 00:04:45.923 EAL: Heap on socket 0 was expanded by 6MB 00:04:45.923 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.923 EAL: request: mp_malloc_sync 00:04:45.923 EAL: No shared files mode enabled, IPC is disabled 00:04:45.923 EAL: Heap on socket 0 was shrunk by 6MB 00:04:45.923 EAL: Trying to obtain current memory policy. 00:04:45.923 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.923 EAL: Restoring previous memory policy: 4 00:04:45.923 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.923 EAL: request: mp_malloc_sync 00:04:45.923 EAL: No shared files mode enabled, IPC is disabled 00:04:45.923 EAL: Heap on socket 0 was expanded by 10MB 00:04:45.923 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.923 EAL: request: mp_malloc_sync 00:04:45.923 EAL: No shared files mode enabled, IPC is disabled 00:04:45.923 EAL: Heap on socket 0 was shrunk by 10MB 00:04:45.923 EAL: Trying to obtain current memory policy. 
00:04:45.923 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.923 EAL: Restoring previous memory policy: 4 00:04:45.923 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.923 EAL: request: mp_malloc_sync 00:04:45.923 EAL: No shared files mode enabled, IPC is disabled 00:04:45.923 EAL: Heap on socket 0 was expanded by 18MB 00:04:45.923 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.923 EAL: request: mp_malloc_sync 00:04:45.923 EAL: No shared files mode enabled, IPC is disabled 00:04:45.923 EAL: Heap on socket 0 was shrunk by 18MB 00:04:45.923 EAL: Trying to obtain current memory policy. 00:04:45.923 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.923 EAL: Restoring previous memory policy: 4 00:04:45.923 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.923 EAL: request: mp_malloc_sync 00:04:45.923 EAL: No shared files mode enabled, IPC is disabled 00:04:45.923 EAL: Heap on socket 0 was expanded by 34MB 00:04:45.923 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.923 EAL: request: mp_malloc_sync 00:04:45.923 EAL: No shared files mode enabled, IPC is disabled 00:04:45.923 EAL: Heap on socket 0 was shrunk by 34MB 00:04:45.923 EAL: Trying to obtain current memory policy. 00:04:45.923 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.923 EAL: Restoring previous memory policy: 4 00:04:45.923 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.923 EAL: request: mp_malloc_sync 00:04:45.923 EAL: No shared files mode enabled, IPC is disabled 00:04:45.923 EAL: Heap on socket 0 was expanded by 66MB 00:04:45.923 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.923 EAL: request: mp_malloc_sync 00:04:45.923 EAL: No shared files mode enabled, IPC is disabled 00:04:45.923 EAL: Heap on socket 0 was shrunk by 66MB 00:04:45.923 EAL: Trying to obtain current memory policy. 00:04:45.923 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.923 EAL: Restoring previous memory policy: 4 00:04:45.923 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.923 EAL: request: mp_malloc_sync 00:04:45.923 EAL: No shared files mode enabled, IPC is disabled 00:04:45.923 EAL: Heap on socket 0 was expanded by 130MB 00:04:45.923 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.923 EAL: request: mp_malloc_sync 00:04:45.923 EAL: No shared files mode enabled, IPC is disabled 00:04:45.923 EAL: Heap on socket 0 was shrunk by 130MB 00:04:45.923 EAL: Trying to obtain current memory policy. 00:04:45.923 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.182 EAL: Restoring previous memory policy: 4 00:04:46.182 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.182 EAL: request: mp_malloc_sync 00:04:46.182 EAL: No shared files mode enabled, IPC is disabled 00:04:46.182 EAL: Heap on socket 0 was expanded by 258MB 00:04:46.182 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.182 EAL: request: mp_malloc_sync 00:04:46.182 EAL: No shared files mode enabled, IPC is disabled 00:04:46.182 EAL: Heap on socket 0 was shrunk by 258MB 00:04:46.182 EAL: Trying to obtain current memory policy. 
00:04:46.182 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.440 EAL: Restoring previous memory policy: 4 00:04:46.440 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.440 EAL: request: mp_malloc_sync 00:04:46.440 EAL: No shared files mode enabled, IPC is disabled 00:04:46.440 EAL: Heap on socket 0 was expanded by 514MB 00:04:46.440 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.698 EAL: request: mp_malloc_sync 00:04:46.698 EAL: No shared files mode enabled, IPC is disabled 00:04:46.698 EAL: Heap on socket 0 was shrunk by 514MB 00:04:46.698 EAL: Trying to obtain current memory policy. 00:04:46.698 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.955 EAL: Restoring previous memory policy: 4 00:04:46.955 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.955 EAL: request: mp_malloc_sync 00:04:46.955 EAL: No shared files mode enabled, IPC is disabled 00:04:46.955 EAL: Heap on socket 0 was expanded by 1026MB 00:04:46.955 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.214 EAL: request: mp_malloc_sync 00:04:47.214 EAL: No shared files mode enabled, IPC is disabled 00:04:47.214 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:47.214 passed 00:04:47.214 00:04:47.214 Run Summary: Type Total Ran Passed Failed Inactive 00:04:47.214 suites 1 1 n/a 0 0 00:04:47.214 tests 2 2 2 0 0 00:04:47.214 asserts 497 497 497 0 n/a 00:04:47.214 00:04:47.214 Elapsed time = 1.379 seconds 00:04:47.214 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.214 EAL: request: mp_malloc_sync 00:04:47.214 EAL: No shared files mode enabled, IPC is disabled 00:04:47.214 EAL: Heap on socket 0 was shrunk by 2MB 00:04:47.214 EAL: No shared files mode enabled, IPC is disabled 00:04:47.214 EAL: No shared files mode enabled, IPC is disabled 00:04:47.215 EAL: No shared files mode enabled, IPC is disabled 00:04:47.215 00:04:47.215 real 0m1.500s 00:04:47.215 user 0m0.850s 00:04:47.215 sys 0m0.618s 00:04:47.215 01:33:11 env.env_vtophys -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:47.215 01:33:11 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:47.215 ************************************ 00:04:47.215 END TEST env_vtophys 00:04:47.215 ************************************ 00:04:47.215 01:33:11 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:47.215 01:33:11 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:47.215 01:33:11 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:47.215 01:33:11 env -- common/autotest_common.sh@10 -- # set +x 00:04:47.474 ************************************ 00:04:47.474 START TEST env_pci 00:04:47.474 ************************************ 00:04:47.474 01:33:11 env.env_pci -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:47.474 00:04:47.474 00:04:47.474 CUnit - A unit testing framework for C - Version 2.1-3 00:04:47.474 http://cunit.sourceforge.net/ 00:04:47.474 00:04:47.474 00:04:47.474 Suite: pci 00:04:47.474 Test: pci_hook ...[2024-05-15 01:33:11.160376] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3910545 has claimed it 00:04:47.474 EAL: Cannot find device (10000:00:01.0) 00:04:47.474 EAL: Failed to attach device on primary process 00:04:47.474 passed 00:04:47.474 00:04:47.474 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:47.474 suites 1 1 n/a 0 0 00:04:47.474 tests 1 1 1 0 0 00:04:47.474 asserts 25 25 25 0 n/a 00:04:47.474 00:04:47.474 Elapsed time = 0.022 seconds 00:04:47.474 00:04:47.474 real 0m0.032s 00:04:47.474 user 0m0.009s 00:04:47.474 sys 0m0.023s 00:04:47.474 01:33:11 env.env_pci -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:47.474 01:33:11 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:47.474 ************************************ 00:04:47.474 END TEST env_pci 00:04:47.474 ************************************ 00:04:47.474 01:33:11 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:47.474 01:33:11 env -- env/env.sh@15 -- # uname 00:04:47.474 01:33:11 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:47.474 01:33:11 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:47.474 01:33:11 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:47.474 01:33:11 env -- common/autotest_common.sh@1098 -- # '[' 5 -le 1 ']' 00:04:47.474 01:33:11 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:47.474 01:33:11 env -- common/autotest_common.sh@10 -- # set +x 00:04:47.474 ************************************ 00:04:47.474 START TEST env_dpdk_post_init 00:04:47.474 ************************************ 00:04:47.474 01:33:11 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:47.474 EAL: Detected CPU lcores: 48 00:04:47.474 EAL: Detected NUMA nodes: 2 00:04:47.474 EAL: Detected shared linkage of DPDK 00:04:47.474 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:47.474 EAL: Selected IOVA mode 'VA' 00:04:47.474 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.474 EAL: VFIO support initialized 00:04:47.474 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:47.474 EAL: Using IOMMU type 1 (Type 1) 00:04:47.474 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:04:47.474 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:04:47.474 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:04:47.733 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:04:47.733 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:04:47.733 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:04:47.733 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:04:47.733 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:04:48.300 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:0b:00.0 (socket 0) 00:04:48.301 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:04:48.301 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:04:48.558 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:04:48.558 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:04:48.558 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:04:48.558 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:04:48.558 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:04:48.558 EAL: Probe PCI 
driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:04:51.838 EAL: Releasing PCI mapped resource for 0000:0b:00.0 00:04:51.838 EAL: Calling pci_unmap_resource for 0000:0b:00.0 at 0x202001020000 00:04:51.838 Starting DPDK initialization... 00:04:51.838 Starting SPDK post initialization... 00:04:51.838 SPDK NVMe probe 00:04:51.838 Attaching to 0000:0b:00.0 00:04:51.838 Attached to 0000:0b:00.0 00:04:51.838 Cleaning up... 00:04:51.838 00:04:51.838 real 0m4.369s 00:04:51.838 user 0m3.217s 00:04:51.838 sys 0m0.212s 00:04:51.838 01:33:15 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:51.838 01:33:15 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:51.838 ************************************ 00:04:51.838 END TEST env_dpdk_post_init 00:04:51.838 ************************************ 00:04:51.838 01:33:15 env -- env/env.sh@26 -- # uname 00:04:51.838 01:33:15 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:51.838 01:33:15 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:51.838 01:33:15 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:51.838 01:33:15 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:51.838 01:33:15 env -- common/autotest_common.sh@10 -- # set +x 00:04:51.838 ************************************ 00:04:51.838 START TEST env_mem_callbacks 00:04:51.838 ************************************ 00:04:51.838 01:33:15 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:51.838 EAL: Detected CPU lcores: 48 00:04:51.838 EAL: Detected NUMA nodes: 2 00:04:51.838 EAL: Detected shared linkage of DPDK 00:04:51.838 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:51.838 EAL: Selected IOVA mode 'VA' 00:04:51.838 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.838 EAL: VFIO support initialized 00:04:51.838 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:51.838 00:04:51.838 00:04:51.838 CUnit - A unit testing framework for C - Version 2.1-3 00:04:51.838 http://cunit.sourceforge.net/ 00:04:51.838 00:04:51.838 00:04:51.838 Suite: memory 00:04:51.838 Test: test ... 
00:04:51.838 register 0x200000200000 2097152 00:04:51.838 malloc 3145728 00:04:51.838 register 0x200000400000 4194304 00:04:51.838 buf 0x200000500000 len 3145728 PASSED 00:04:51.838 malloc 64 00:04:51.838 buf 0x2000004fff40 len 64 PASSED 00:04:51.838 malloc 4194304 00:04:51.838 register 0x200000800000 6291456 00:04:51.838 buf 0x200000a00000 len 4194304 PASSED 00:04:51.838 free 0x200000500000 3145728 00:04:51.838 free 0x2000004fff40 64 00:04:51.838 unregister 0x200000400000 4194304 PASSED 00:04:51.838 free 0x200000a00000 4194304 00:04:51.838 unregister 0x200000800000 6291456 PASSED 00:04:51.838 malloc 8388608 00:04:51.838 register 0x200000400000 10485760 00:04:51.838 buf 0x200000600000 len 8388608 PASSED 00:04:51.838 free 0x200000600000 8388608 00:04:51.838 unregister 0x200000400000 10485760 PASSED 00:04:51.838 passed 00:04:51.838 00:04:51.838 Run Summary: Type Total Ran Passed Failed Inactive 00:04:51.838 suites 1 1 n/a 0 0 00:04:51.838 tests 1 1 1 0 0 00:04:51.838 asserts 15 15 15 0 n/a 00:04:51.838 00:04:51.838 Elapsed time = 0.005 seconds 00:04:51.838 00:04:51.838 real 0m0.051s 00:04:51.838 user 0m0.010s 00:04:51.838 sys 0m0.041s 00:04:51.838 01:33:15 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:51.838 01:33:15 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:51.838 ************************************ 00:04:51.838 END TEST env_mem_callbacks 00:04:51.838 ************************************ 00:04:51.838 00:04:51.838 real 0m6.415s 00:04:51.838 user 0m4.363s 00:04:51.838 sys 0m1.088s 00:04:51.838 01:33:15 env -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:51.838 01:33:15 env -- common/autotest_common.sh@10 -- # set +x 00:04:51.838 ************************************ 00:04:51.838 END TEST env 00:04:51.838 ************************************ 00:04:51.838 01:33:15 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:51.838 01:33:15 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:51.838 01:33:15 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:51.838 01:33:15 -- common/autotest_common.sh@10 -- # set +x 00:04:52.096 ************************************ 00:04:52.096 START TEST rpc 00:04:52.096 ************************************ 00:04:52.096 01:33:15 rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:52.096 * Looking for test storage... 00:04:52.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:52.096 01:33:15 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3911198 00:04:52.096 01:33:15 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:52.096 01:33:15 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.096 01:33:15 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3911198 00:04:52.096 01:33:15 rpc -- common/autotest_common.sh@828 -- # '[' -z 3911198 ']' 00:04:52.096 01:33:15 rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.096 01:33:15 rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:52.096 01:33:15 rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
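While waitforlisten polls for the target, the readiness condition it waits on is simply that the UNIX-domain socket at /var/tmp/spdk.sock accepts a JSON-RPC call. A minimal liveness probe with the bundled Python client; rpc_get_methods is part of the standard RPC set, and the socket path is the one shown in this run:

# Equivalent in spirit to waitforlisten's readiness check: connect to
# the target's socket and issue one harmless RPC.
from spdk.rpc.client import JSONRPCClient

client = JSONRPCClient("/var/tmp/spdk.sock")
methods = client.call("rpc_get_methods")
print(f"target is up, {len(methods)} RPC methods registered")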
00:04:52.096 01:33:15 rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:52.096 01:33:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.096 [2024-05-15 01:33:15.894236] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:04:52.096 [2024-05-15 01:33:15.894330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3911198 ] 00:04:52.096 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.096 [2024-05-15 01:33:15.959378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.354 [2024-05-15 01:33:16.045227] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:52.354 [2024-05-15 01:33:16.045279] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3911198' to capture a snapshot of events at runtime. 00:04:52.354 [2024-05-15 01:33:16.045294] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:52.354 [2024-05-15 01:33:16.045306] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:52.354 [2024-05-15 01:33:16.045317] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3911198 for offline analysis/debug. 00:04:52.354 [2024-05-15 01:33:16.045345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.612 01:33:16 rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:52.612 01:33:16 rpc -- common/autotest_common.sh@861 -- # return 0 00:04:52.612 01:33:16 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:52.612 01:33:16 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:52.612 01:33:16 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:52.612 01:33:16 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:52.612 01:33:16 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:52.612 01:33:16 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:52.612 01:33:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.612 ************************************ 00:04:52.612 START TEST rpc_integrity 00:04:52.612 ************************************ 00:04:52.612 01:33:16 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # rpc_integrity 00:04:52.612 01:33:16 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:52.612 01:33:16 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:52.612 01:33:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.612 01:33:16 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:52.612 01:33:16 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:52.612 01:33:16 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:52.612 01:33:16 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:52.612 01:33:16 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:52.612 01:33:16 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:52.612 01:33:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.612 01:33:16 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:52.612 01:33:16 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:52.612 01:33:16 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:52.612 01:33:16 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:52.612 01:33:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.612 01:33:16 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:52.612 01:33:16 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:52.612 { 00:04:52.612 "name": "Malloc0", 00:04:52.612 "aliases": [ 00:04:52.612 "e607756e-3ae0-4d16-9739-6c3324cecce1" 00:04:52.612 ], 00:04:52.612 "product_name": "Malloc disk", 00:04:52.612 "block_size": 512, 00:04:52.612 "num_blocks": 16384, 00:04:52.612 "uuid": "e607756e-3ae0-4d16-9739-6c3324cecce1", 00:04:52.612 "assigned_rate_limits": { 00:04:52.612 "rw_ios_per_sec": 0, 00:04:52.612 "rw_mbytes_per_sec": 0, 00:04:52.612 "r_mbytes_per_sec": 0, 00:04:52.612 "w_mbytes_per_sec": 0 00:04:52.612 }, 00:04:52.613 "claimed": false, 00:04:52.613 "zoned": false, 00:04:52.613 "supported_io_types": { 00:04:52.613 "read": true, 00:04:52.613 "write": true, 00:04:52.613 "unmap": true, 00:04:52.613 "write_zeroes": true, 00:04:52.613 "flush": true, 00:04:52.613 "reset": true, 00:04:52.613 "compare": false, 00:04:52.613 "compare_and_write": false, 00:04:52.613 "abort": true, 00:04:52.613 "nvme_admin": false, 00:04:52.613 "nvme_io": false 00:04:52.613 }, 00:04:52.613 "memory_domains": [ 00:04:52.613 { 00:04:52.613 "dma_device_id": "system", 00:04:52.613 "dma_device_type": 1 00:04:52.613 }, 00:04:52.613 { 00:04:52.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:52.613 "dma_device_type": 2 00:04:52.613 } 00:04:52.613 ], 00:04:52.613 "driver_specific": {} 00:04:52.613 } 00:04:52.613 ]' 00:04:52.613 01:33:16 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:52.613 01:33:16 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:52.613 01:33:16 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:52.613 01:33:16 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:52.613 01:33:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.613 [2024-05-15 01:33:16.436881] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:52.613 [2024-05-15 01:33:16.436930] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:52.613 [2024-05-15 01:33:16.436954] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fe53a0 00:04:52.613 [2024-05-15 01:33:16.436969] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:52.613 [2024-05-15 01:33:16.438464] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:52.613 [2024-05-15 01:33:16.438491] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:52.613 Passthru0 00:04:52.613 01:33:16 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:52.613 01:33:16 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:04:52.613 01:33:16 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:52.613 01:33:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.613 01:33:16 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:52.613 01:33:16 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:52.613 { 00:04:52.613 "name": "Malloc0", 00:04:52.613 "aliases": [ 00:04:52.613 "e607756e-3ae0-4d16-9739-6c3324cecce1" 00:04:52.613 ], 00:04:52.613 "product_name": "Malloc disk", 00:04:52.613 "block_size": 512, 00:04:52.613 "num_blocks": 16384, 00:04:52.613 "uuid": "e607756e-3ae0-4d16-9739-6c3324cecce1", 00:04:52.613 "assigned_rate_limits": { 00:04:52.613 "rw_ios_per_sec": 0, 00:04:52.613 "rw_mbytes_per_sec": 0, 00:04:52.613 "r_mbytes_per_sec": 0, 00:04:52.613 "w_mbytes_per_sec": 0 00:04:52.613 }, 00:04:52.613 "claimed": true, 00:04:52.613 "claim_type": "exclusive_write", 00:04:52.613 "zoned": false, 00:04:52.613 "supported_io_types": { 00:04:52.613 "read": true, 00:04:52.613 "write": true, 00:04:52.613 "unmap": true, 00:04:52.613 "write_zeroes": true, 00:04:52.613 "flush": true, 00:04:52.613 "reset": true, 00:04:52.613 "compare": false, 00:04:52.613 "compare_and_write": false, 00:04:52.613 "abort": true, 00:04:52.613 "nvme_admin": false, 00:04:52.613 "nvme_io": false 00:04:52.613 }, 00:04:52.613 "memory_domains": [ 00:04:52.613 { 00:04:52.613 "dma_device_id": "system", 00:04:52.613 "dma_device_type": 1 00:04:52.613 }, 00:04:52.613 { 00:04:52.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:52.613 "dma_device_type": 2 00:04:52.613 } 00:04:52.613 ], 00:04:52.613 "driver_specific": {} 00:04:52.613 }, 00:04:52.613 { 00:04:52.613 "name": "Passthru0", 00:04:52.613 "aliases": [ 00:04:52.613 "0445e28b-a95c-5150-a178-87660f5baf41" 00:04:52.613 ], 00:04:52.613 "product_name": "passthru", 00:04:52.613 "block_size": 512, 00:04:52.613 "num_blocks": 16384, 00:04:52.613 "uuid": "0445e28b-a95c-5150-a178-87660f5baf41", 00:04:52.613 "assigned_rate_limits": { 00:04:52.613 "rw_ios_per_sec": 0, 00:04:52.613 "rw_mbytes_per_sec": 0, 00:04:52.613 "r_mbytes_per_sec": 0, 00:04:52.613 "w_mbytes_per_sec": 0 00:04:52.613 }, 00:04:52.613 "claimed": false, 00:04:52.613 "zoned": false, 00:04:52.613 "supported_io_types": { 00:04:52.613 "read": true, 00:04:52.613 "write": true, 00:04:52.613 "unmap": true, 00:04:52.613 "write_zeroes": true, 00:04:52.613 "flush": true, 00:04:52.613 "reset": true, 00:04:52.613 "compare": false, 00:04:52.613 "compare_and_write": false, 00:04:52.613 "abort": true, 00:04:52.613 "nvme_admin": false, 00:04:52.613 "nvme_io": false 00:04:52.613 }, 00:04:52.613 "memory_domains": [ 00:04:52.613 { 00:04:52.613 "dma_device_id": "system", 00:04:52.613 "dma_device_type": 1 00:04:52.613 }, 00:04:52.613 { 00:04:52.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:52.613 "dma_device_type": 2 00:04:52.613 } 00:04:52.613 ], 00:04:52.613 "driver_specific": { 00:04:52.613 "passthru": { 00:04:52.613 "name": "Passthru0", 00:04:52.613 "base_bdev_name": "Malloc0" 00:04:52.613 } 00:04:52.613 } 00:04:52.613 } 00:04:52.613 ]' 00:04:52.613 01:33:16 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:52.613 01:33:16 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:52.613 01:33:16 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:52.613 01:33:16 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:52.613 01:33:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.613 
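The integrity test running here is a create/inspect/delete round trip: create a malloc bdev, layer a passthru bdev on it, verify bdev_get_bdevs reports both, then tear them down and verify the list is empty again. The same sequence driven from the Python client, sketched below; the parameter names are taken from the JSON dumps above, and the assumption that bdev_malloc_create returns the new bdev's name is mine, not this log's:

# The rpc_integrity round trip from Python; num_blocks/block_size match
# the 8 MiB / 512 B malloc bdev shown in the dumps above.
from spdk.rpc.client import JSONRPCClient

c = JSONRPCClient("/var/tmp/spdk.sock")  # assumed default socket path
name = c.call("bdev_malloc_create", {"num_blocks": 16384, "block_size": 512})
c.call("bdev_passthru_create", {"base_bdev_name": name, "name": "Passthru0"})
assert len(c.call("bdev_get_bdevs")) == 2     # Malloc0 + Passthru0
c.call("bdev_passthru_delete", {"name": "Passthru0"})
c.call("bdev_malloc_delete", {"name": name})
assert len(c.call("bdev_get_bdevs")) == 0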
01:33:16 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:52.613 01:33:16 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:52.613 01:33:16 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:52.613 01:33:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.613 01:33:16 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:52.613 01:33:16 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:52.613 01:33:16 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:52.613 01:33:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.613 01:33:16 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:52.613 01:33:16 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:52.613 01:33:16 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:52.872 01:33:16 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:52.872 00:04:52.872 real 0m0.228s 00:04:52.872 user 0m0.154s 00:04:52.872 sys 0m0.020s 00:04:52.872 01:33:16 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:52.872 01:33:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.872 ************************************ 00:04:52.872 END TEST rpc_integrity 00:04:52.872 ************************************ 00:04:52.872 01:33:16 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:52.872 01:33:16 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:52.872 01:33:16 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:52.872 01:33:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.872 ************************************ 00:04:52.872 START TEST rpc_plugins 00:04:52.872 ************************************ 00:04:52.872 01:33:16 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # rpc_plugins 00:04:52.872 01:33:16 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:52.872 01:33:16 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:52.872 01:33:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:52.872 01:33:16 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:52.872 01:33:16 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:52.872 01:33:16 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:52.872 01:33:16 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:52.872 01:33:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:52.872 01:33:16 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:52.872 01:33:16 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:52.872 { 00:04:52.872 "name": "Malloc1", 00:04:52.872 "aliases": [ 00:04:52.872 "3f9cea9e-486e-4b73-9220-726440af8862" 00:04:52.872 ], 00:04:52.872 "product_name": "Malloc disk", 00:04:52.872 "block_size": 4096, 00:04:52.872 "num_blocks": 256, 00:04:52.872 "uuid": "3f9cea9e-486e-4b73-9220-726440af8862", 00:04:52.872 "assigned_rate_limits": { 00:04:52.872 "rw_ios_per_sec": 0, 00:04:52.872 "rw_mbytes_per_sec": 0, 00:04:52.872 "r_mbytes_per_sec": 0, 00:04:52.872 "w_mbytes_per_sec": 0 00:04:52.872 }, 00:04:52.872 "claimed": false, 00:04:52.872 "zoned": false, 00:04:52.872 "supported_io_types": { 00:04:52.872 "read": true, 00:04:52.872 "write": true, 00:04:52.872 "unmap": true, 00:04:52.872 "write_zeroes": true, 00:04:52.872 
"flush": true, 00:04:52.872 "reset": true, 00:04:52.872 "compare": false, 00:04:52.872 "compare_and_write": false, 00:04:52.872 "abort": true, 00:04:52.872 "nvme_admin": false, 00:04:52.872 "nvme_io": false 00:04:52.872 }, 00:04:52.872 "memory_domains": [ 00:04:52.872 { 00:04:52.872 "dma_device_id": "system", 00:04:52.872 "dma_device_type": 1 00:04:52.872 }, 00:04:52.872 { 00:04:52.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:52.872 "dma_device_type": 2 00:04:52.872 } 00:04:52.872 ], 00:04:52.872 "driver_specific": {} 00:04:52.872 } 00:04:52.872 ]' 00:04:52.872 01:33:16 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:52.872 01:33:16 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:52.872 01:33:16 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:52.872 01:33:16 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:52.872 01:33:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:52.872 01:33:16 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:52.872 01:33:16 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:52.872 01:33:16 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:52.872 01:33:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:52.872 01:33:16 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:52.872 01:33:16 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:52.873 01:33:16 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:52.873 01:33:16 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:52.873 00:04:52.873 real 0m0.112s 00:04:52.873 user 0m0.071s 00:04:52.873 sys 0m0.012s 00:04:52.873 01:33:16 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:52.873 01:33:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:52.873 ************************************ 00:04:52.873 END TEST rpc_plugins 00:04:52.873 ************************************ 00:04:52.873 01:33:16 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:52.873 01:33:16 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:52.873 01:33:16 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:52.873 01:33:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.873 ************************************ 00:04:52.873 START TEST rpc_trace_cmd_test 00:04:52.873 ************************************ 00:04:52.873 01:33:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # rpc_trace_cmd_test 00:04:52.873 01:33:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:52.873 01:33:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:52.873 01:33:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:52.873 01:33:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:52.873 01:33:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:52.873 01:33:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:52.873 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3911198", 00:04:52.873 "tpoint_group_mask": "0x8", 00:04:52.873 "iscsi_conn": { 00:04:52.873 "mask": "0x2", 00:04:52.873 "tpoint_mask": "0x0" 00:04:52.873 }, 00:04:52.873 "scsi": { 00:04:52.873 "mask": "0x4", 00:04:52.873 "tpoint_mask": "0x0" 00:04:52.873 }, 00:04:52.873 "bdev": { 00:04:52.873 "mask": "0x8", 00:04:52.873 "tpoint_mask": 
"0xffffffffffffffff" 00:04:52.873 }, 00:04:52.873 "nvmf_rdma": { 00:04:52.873 "mask": "0x10", 00:04:52.873 "tpoint_mask": "0x0" 00:04:52.873 }, 00:04:52.873 "nvmf_tcp": { 00:04:52.873 "mask": "0x20", 00:04:52.873 "tpoint_mask": "0x0" 00:04:52.873 }, 00:04:52.873 "ftl": { 00:04:52.873 "mask": "0x40", 00:04:52.873 "tpoint_mask": "0x0" 00:04:52.873 }, 00:04:52.873 "blobfs": { 00:04:52.873 "mask": "0x80", 00:04:52.873 "tpoint_mask": "0x0" 00:04:52.873 }, 00:04:52.873 "dsa": { 00:04:52.873 "mask": "0x200", 00:04:52.873 "tpoint_mask": "0x0" 00:04:52.873 }, 00:04:52.873 "thread": { 00:04:52.873 "mask": "0x400", 00:04:52.873 "tpoint_mask": "0x0" 00:04:52.873 }, 00:04:52.873 "nvme_pcie": { 00:04:52.873 "mask": "0x800", 00:04:52.873 "tpoint_mask": "0x0" 00:04:52.873 }, 00:04:52.873 "iaa": { 00:04:52.873 "mask": "0x1000", 00:04:52.873 "tpoint_mask": "0x0" 00:04:52.873 }, 00:04:52.873 "nvme_tcp": { 00:04:52.873 "mask": "0x2000", 00:04:52.873 "tpoint_mask": "0x0" 00:04:52.873 }, 00:04:52.873 "bdev_nvme": { 00:04:52.873 "mask": "0x4000", 00:04:52.873 "tpoint_mask": "0x0" 00:04:52.873 }, 00:04:52.873 "sock": { 00:04:52.873 "mask": "0x8000", 00:04:52.873 "tpoint_mask": "0x0" 00:04:52.873 } 00:04:52.873 }' 00:04:52.873 01:33:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:53.131 01:33:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:53.131 01:33:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:53.131 01:33:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:53.131 01:33:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:53.131 01:33:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:53.131 01:33:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:53.131 01:33:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:53.131 01:33:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:53.131 01:33:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:53.131 00:04:53.131 real 0m0.195s 00:04:53.131 user 0m0.177s 00:04:53.131 sys 0m0.011s 00:04:53.131 01:33:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:53.131 01:33:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:53.131 ************************************ 00:04:53.131 END TEST rpc_trace_cmd_test 00:04:53.131 ************************************ 00:04:53.131 01:33:16 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:53.131 01:33:16 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:53.131 01:33:16 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:53.131 01:33:16 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:53.131 01:33:16 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:53.131 01:33:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.131 ************************************ 00:04:53.131 START TEST rpc_daemon_integrity 00:04:53.131 ************************************ 00:04:53.131 01:33:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # rpc_integrity 00:04:53.131 01:33:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:53.131 01:33:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:53.131 01:33:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.131 01:33:17 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:53.131 01:33:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:53.131 01:33:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:53.131 01:33:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:53.131 01:33:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:53.131 01:33:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:53.131 01:33:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.388 01:33:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:53.388 01:33:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:53.388 01:33:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:53.388 01:33:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:53.388 01:33:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.388 01:33:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:53.388 01:33:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:53.389 { 00:04:53.389 "name": "Malloc2", 00:04:53.389 "aliases": [ 00:04:53.389 "97067b0f-9b94-42b4-8662-bf50108612da" 00:04:53.389 ], 00:04:53.389 "product_name": "Malloc disk", 00:04:53.389 "block_size": 512, 00:04:53.389 "num_blocks": 16384, 00:04:53.389 "uuid": "97067b0f-9b94-42b4-8662-bf50108612da", 00:04:53.389 "assigned_rate_limits": { 00:04:53.389 "rw_ios_per_sec": 0, 00:04:53.389 "rw_mbytes_per_sec": 0, 00:04:53.389 "r_mbytes_per_sec": 0, 00:04:53.389 "w_mbytes_per_sec": 0 00:04:53.389 }, 00:04:53.389 "claimed": false, 00:04:53.389 "zoned": false, 00:04:53.389 "supported_io_types": { 00:04:53.389 "read": true, 00:04:53.389 "write": true, 00:04:53.389 "unmap": true, 00:04:53.389 "write_zeroes": true, 00:04:53.389 "flush": true, 00:04:53.389 "reset": true, 00:04:53.389 "compare": false, 00:04:53.389 "compare_and_write": false, 00:04:53.389 "abort": true, 00:04:53.389 "nvme_admin": false, 00:04:53.389 "nvme_io": false 00:04:53.389 }, 00:04:53.389 "memory_domains": [ 00:04:53.389 { 00:04:53.389 "dma_device_id": "system", 00:04:53.389 "dma_device_type": 1 00:04:53.389 }, 00:04:53.389 { 00:04:53.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:53.389 "dma_device_type": 2 00:04:53.389 } 00:04:53.389 ], 00:04:53.389 "driver_specific": {} 00:04:53.389 } 00:04:53.389 ]' 00:04:53.389 01:33:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:53.389 01:33:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:53.389 01:33:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:53.389 01:33:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:53.389 01:33:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.389 [2024-05-15 01:33:17.118816] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:53.389 [2024-05-15 01:33:17.118863] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:53.389 [2024-05-15 01:33:17.118891] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e2c940 00:04:53.389 [2024-05-15 01:33:17.118908] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:53.389 [2024-05-15 01:33:17.120245] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:53.389 [2024-05-15 01:33:17.120289] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:53.389 Passthru0 00:04:53.389 01:33:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:53.389 01:33:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:53.389 01:33:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:53.389 01:33:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.389 01:33:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:53.389 01:33:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:53.389 { 00:04:53.389 "name": "Malloc2", 00:04:53.389 "aliases": [ 00:04:53.389 "97067b0f-9b94-42b4-8662-bf50108612da" 00:04:53.389 ], 00:04:53.389 "product_name": "Malloc disk", 00:04:53.389 "block_size": 512, 00:04:53.389 "num_blocks": 16384, 00:04:53.389 "uuid": "97067b0f-9b94-42b4-8662-bf50108612da", 00:04:53.389 "assigned_rate_limits": { 00:04:53.389 "rw_ios_per_sec": 0, 00:04:53.389 "rw_mbytes_per_sec": 0, 00:04:53.389 "r_mbytes_per_sec": 0, 00:04:53.389 "w_mbytes_per_sec": 0 00:04:53.389 }, 00:04:53.389 "claimed": true, 00:04:53.389 "claim_type": "exclusive_write", 00:04:53.389 "zoned": false, 00:04:53.389 "supported_io_types": { 00:04:53.389 "read": true, 00:04:53.389 "write": true, 00:04:53.389 "unmap": true, 00:04:53.389 "write_zeroes": true, 00:04:53.389 "flush": true, 00:04:53.389 "reset": true, 00:04:53.389 "compare": false, 00:04:53.389 "compare_and_write": false, 00:04:53.389 "abort": true, 00:04:53.389 "nvme_admin": false, 00:04:53.389 "nvme_io": false 00:04:53.389 }, 00:04:53.389 "memory_domains": [ 00:04:53.389 { 00:04:53.389 "dma_device_id": "system", 00:04:53.389 "dma_device_type": 1 00:04:53.389 }, 00:04:53.389 { 00:04:53.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:53.389 "dma_device_type": 2 00:04:53.389 } 00:04:53.389 ], 00:04:53.389 "driver_specific": {} 00:04:53.389 }, 00:04:53.389 { 00:04:53.389 "name": "Passthru0", 00:04:53.389 "aliases": [ 00:04:53.389 "6a7cdbfe-ff49-5e5c-a07f-80bac8b7b875" 00:04:53.389 ], 00:04:53.389 "product_name": "passthru", 00:04:53.389 "block_size": 512, 00:04:53.389 "num_blocks": 16384, 00:04:53.389 "uuid": "6a7cdbfe-ff49-5e5c-a07f-80bac8b7b875", 00:04:53.389 "assigned_rate_limits": { 00:04:53.389 "rw_ios_per_sec": 0, 00:04:53.389 "rw_mbytes_per_sec": 0, 00:04:53.389 "r_mbytes_per_sec": 0, 00:04:53.389 "w_mbytes_per_sec": 0 00:04:53.389 }, 00:04:53.389 "claimed": false, 00:04:53.389 "zoned": false, 00:04:53.389 "supported_io_types": { 00:04:53.389 "read": true, 00:04:53.389 "write": true, 00:04:53.389 "unmap": true, 00:04:53.389 "write_zeroes": true, 00:04:53.389 "flush": true, 00:04:53.389 "reset": true, 00:04:53.389 "compare": false, 00:04:53.389 "compare_and_write": false, 00:04:53.389 "abort": true, 00:04:53.389 "nvme_admin": false, 00:04:53.389 "nvme_io": false 00:04:53.389 }, 00:04:53.389 "memory_domains": [ 00:04:53.389 { 00:04:53.389 "dma_device_id": "system", 00:04:53.389 "dma_device_type": 1 00:04:53.389 }, 00:04:53.389 { 00:04:53.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:53.389 "dma_device_type": 2 00:04:53.389 } 00:04:53.389 ], 00:04:53.389 "driver_specific": { 00:04:53.389 "passthru": { 00:04:53.389 "name": "Passthru0", 00:04:53.389 "base_bdev_name": "Malloc2" 00:04:53.389 } 00:04:53.389 } 00:04:53.389 } 00:04:53.389 ]' 00:04:53.389 01:33:17 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:53.389 01:33:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:53.389 01:33:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:53.389 01:33:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:53.389 01:33:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.389 01:33:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:53.389 01:33:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:53.389 01:33:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:53.389 01:33:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.389 01:33:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:53.389 01:33:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:53.389 01:33:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:53.389 01:33:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.389 01:33:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:53.389 01:33:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:53.389 01:33:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:53.389 01:33:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:53.389 00:04:53.389 real 0m0.222s 00:04:53.389 user 0m0.150s 00:04:53.389 sys 0m0.021s 00:04:53.389 01:33:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:53.389 01:33:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.389 ************************************ 00:04:53.389 END TEST rpc_daemon_integrity 00:04:53.389 ************************************ 00:04:53.389 01:33:17 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:53.389 01:33:17 rpc -- rpc/rpc.sh@84 -- # killprocess 3911198 00:04:53.389 01:33:17 rpc -- common/autotest_common.sh@947 -- # '[' -z 3911198 ']' 00:04:53.389 01:33:17 rpc -- common/autotest_common.sh@951 -- # kill -0 3911198 00:04:53.389 01:33:17 rpc -- common/autotest_common.sh@952 -- # uname 00:04:53.389 01:33:17 rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:53.389 01:33:17 rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3911198 00:04:53.389 01:33:17 rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:53.389 01:33:17 rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:53.389 01:33:17 rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3911198' 00:04:53.389 killing process with pid 3911198 00:04:53.389 01:33:17 rpc -- common/autotest_common.sh@966 -- # kill 3911198 00:04:53.389 01:33:17 rpc -- common/autotest_common.sh@971 -- # wait 3911198 00:04:53.954 00:04:53.954 real 0m1.887s 00:04:53.954 user 0m2.380s 00:04:53.954 sys 0m0.594s 00:04:53.954 01:33:17 rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:53.954 01:33:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.954 ************************************ 00:04:53.954 END TEST rpc 00:04:53.954 ************************************ 00:04:53.954 01:33:17 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:53.954 01:33:17 
-- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:53.954 01:33:17 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:53.954 01:33:17 -- common/autotest_common.sh@10 -- # set +x 00:04:53.954 ************************************ 00:04:53.954 START TEST skip_rpc 00:04:53.954 ************************************ 00:04:53.954 01:33:17 skip_rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:53.954 * Looking for test storage... 00:04:53.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:53.954 01:33:17 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:53.954 01:33:17 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:53.954 01:33:17 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:53.954 01:33:17 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:53.954 01:33:17 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:53.954 01:33:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.954 ************************************ 00:04:53.954 START TEST skip_rpc 00:04:53.954 ************************************ 00:04:53.954 01:33:17 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # test_skip_rpc 00:04:53.954 01:33:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3911637 00:04:53.954 01:33:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:53.954 01:33:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:53.954 01:33:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:53.954 [2024-05-15 01:33:17.865046] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
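The target above was launched with --no-rpc-server, and the lines that follow assert that any RPC against it must fail. A condensed sketch of that check, assuming it runs from the SPDK repo root (paths and the 5-second settle time are taken from the log; the harness's rpc_cmd wrapper is replaced here with a direct scripts/rpc.py call):

build/bin/spdk_tgt --no-rpc-server -m 0x1 &
spdk_pid=$!
sleep 5                                   # give the reactor time to start
# With no RPC server listening on /var/tmp/spdk.sock, this must fail,
# and the test passes only when it does.
if scripts/rpc.py spdk_get_version >/dev/null 2>&1; then
    echo "unexpected: RPC succeeded without an RPC server" >&2
    exit 1
fi
kill "$spdk_pid"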
00:04:53.954 [2024-05-15 01:33:17.865110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3911637 ] 00:04:54.213 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.213 [2024-05-15 01:33:17.935618] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.213 [2024-05-15 01:33:18.021972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.474 01:33:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:59.474 01:33:22 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # local es=0 00:04:59.474 01:33:22 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:59.474 01:33:22 skip_rpc.skip_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:04:59.474 01:33:22 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:59.474 01:33:22 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:04:59.474 01:33:22 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:59.474 01:33:22 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # rpc_cmd spdk_get_version 00:04:59.474 01:33:22 skip_rpc.skip_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:59.474 01:33:22 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.474 01:33:22 skip_rpc.skip_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:04:59.474 01:33:22 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # es=1 00:04:59.474 01:33:22 skip_rpc.skip_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:04:59.474 01:33:22 skip_rpc.skip_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:04:59.474 01:33:22 skip_rpc.skip_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:04:59.474 01:33:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:59.474 01:33:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3911637 00:04:59.474 01:33:22 skip_rpc.skip_rpc -- common/autotest_common.sh@947 -- # '[' -z 3911637 ']' 00:04:59.474 01:33:22 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # kill -0 3911637 00:04:59.474 01:33:22 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # uname 00:04:59.474 01:33:22 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:59.474 01:33:22 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3911637 00:04:59.474 01:33:22 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:59.474 01:33:22 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:59.474 01:33:22 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3911637' 00:04:59.474 killing process with pid 3911637 00:04:59.474 01:33:22 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # kill 3911637 00:04:59.474 01:33:22 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # wait 3911637 00:04:59.474 00:04:59.474 real 0m5.453s 00:04:59.474 user 0m5.125s 00:04:59.474 sys 0m0.333s 00:04:59.474 01:33:23 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:59.474 01:33:23 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.474 ************************************ 00:04:59.474 END TEST skip_rpc 
00:04:59.474 ************************************ 00:04:59.474 01:33:23 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:59.474 01:33:23 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:59.474 01:33:23 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:59.474 01:33:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.474 ************************************ 00:04:59.474 START TEST skip_rpc_with_json 00:04:59.474 ************************************ 00:04:59.474 01:33:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # test_skip_rpc_with_json 00:04:59.474 01:33:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:59.474 01:33:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3912323 00:04:59.474 01:33:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:59.474 01:33:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:59.474 01:33:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3912323 00:04:59.474 01:33:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@828 -- # '[' -z 3912323 ']' 00:04:59.474 01:33:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.474 01:33:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:59.474 01:33:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.474 01:33:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:59.474 01:33:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:59.474 [2024-05-15 01:33:23.374801] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
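The waitforlisten call above blocks until the freshly started target answers on /var/tmp/spdk.sock. Roughly what it amounts to, as a sketch only (the real helper in autotest_common.sh adds a retry limit and more careful error handling; spdk_pid is the pid printed above):

while ! scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    kill -0 "$spdk_pid" 2>/dev/null || exit 1   # stop waiting if the target died
    sleep 0.5
done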
00:04:59.474 [2024-05-15 01:33:23.374877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3912323 ] 00:04:59.732 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.732 [2024-05-15 01:33:23.443714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.732 [2024-05-15 01:33:23.529038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.990 01:33:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:59.990 01:33:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@861 -- # return 0 00:04:59.990 01:33:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:59.990 01:33:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:59.990 01:33:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:59.990 [2024-05-15 01:33:23.787717] nvmf_rpc.c:2547:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:59.990 request: 00:04:59.990 { 00:04:59.990 "trtype": "tcp", 00:04:59.990 "method": "nvmf_get_transports", 00:04:59.990 "req_id": 1 00:04:59.990 } 00:04:59.990 Got JSON-RPC error response 00:04:59.990 response: 00:04:59.990 { 00:04:59.990 "code": -19, 00:04:59.990 "message": "No such device" 00:04:59.990 } 00:04:59.990 01:33:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:04:59.990 01:33:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:59.990 01:33:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:59.990 01:33:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:59.990 [2024-05-15 01:33:23.795839] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:59.990 01:33:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:59.990 01:33:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:59.990 01:33:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:59.990 01:33:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:00.248 01:33:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:00.248 01:33:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:00.248 { 00:05:00.248 "subsystems": [ 00:05:00.248 { 00:05:00.248 "subsystem": "vfio_user_target", 00:05:00.248 "config": null 00:05:00.248 }, 00:05:00.248 { 00:05:00.248 "subsystem": "keyring", 00:05:00.248 "config": [] 00:05:00.248 }, 00:05:00.248 { 00:05:00.248 "subsystem": "iobuf", 00:05:00.248 "config": [ 00:05:00.248 { 00:05:00.248 "method": "iobuf_set_options", 00:05:00.248 "params": { 00:05:00.248 "small_pool_count": 8192, 00:05:00.248 "large_pool_count": 1024, 00:05:00.248 "small_bufsize": 8192, 00:05:00.248 "large_bufsize": 135168 00:05:00.248 } 00:05:00.248 } 00:05:00.248 ] 00:05:00.248 }, 00:05:00.248 { 00:05:00.248 "subsystem": "sock", 00:05:00.248 "config": [ 00:05:00.248 { 00:05:00.248 "method": "sock_impl_set_options", 00:05:00.248 "params": { 00:05:00.248 "impl_name": "posix", 00:05:00.248 "recv_buf_size": 2097152, 00:05:00.248 "send_buf_size": 2097152, 
00:05:00.248 "enable_recv_pipe": true, 00:05:00.248 "enable_quickack": false, 00:05:00.248 "enable_placement_id": 0, 00:05:00.248 "enable_zerocopy_send_server": true, 00:05:00.248 "enable_zerocopy_send_client": false, 00:05:00.248 "zerocopy_threshold": 0, 00:05:00.248 "tls_version": 0, 00:05:00.248 "enable_ktls": false 00:05:00.248 } 00:05:00.248 }, 00:05:00.248 { 00:05:00.248 "method": "sock_impl_set_options", 00:05:00.248 "params": { 00:05:00.248 "impl_name": "ssl", 00:05:00.248 "recv_buf_size": 4096, 00:05:00.248 "send_buf_size": 4096, 00:05:00.248 "enable_recv_pipe": true, 00:05:00.248 "enable_quickack": false, 00:05:00.248 "enable_placement_id": 0, 00:05:00.248 "enable_zerocopy_send_server": true, 00:05:00.248 "enable_zerocopy_send_client": false, 00:05:00.248 "zerocopy_threshold": 0, 00:05:00.248 "tls_version": 0, 00:05:00.248 "enable_ktls": false 00:05:00.248 } 00:05:00.248 } 00:05:00.248 ] 00:05:00.248 }, 00:05:00.248 { 00:05:00.248 "subsystem": "vmd", 00:05:00.248 "config": [] 00:05:00.248 }, 00:05:00.248 { 00:05:00.248 "subsystem": "accel", 00:05:00.248 "config": [ 00:05:00.248 { 00:05:00.248 "method": "accel_set_options", 00:05:00.248 "params": { 00:05:00.248 "small_cache_size": 128, 00:05:00.248 "large_cache_size": 16, 00:05:00.248 "task_count": 2048, 00:05:00.248 "sequence_count": 2048, 00:05:00.248 "buf_count": 2048 00:05:00.248 } 00:05:00.248 } 00:05:00.248 ] 00:05:00.248 }, 00:05:00.248 { 00:05:00.248 "subsystem": "bdev", 00:05:00.248 "config": [ 00:05:00.248 { 00:05:00.248 "method": "bdev_set_options", 00:05:00.248 "params": { 00:05:00.248 "bdev_io_pool_size": 65535, 00:05:00.248 "bdev_io_cache_size": 256, 00:05:00.248 "bdev_auto_examine": true, 00:05:00.248 "iobuf_small_cache_size": 128, 00:05:00.248 "iobuf_large_cache_size": 16 00:05:00.248 } 00:05:00.248 }, 00:05:00.248 { 00:05:00.248 "method": "bdev_raid_set_options", 00:05:00.248 "params": { 00:05:00.248 "process_window_size_kb": 1024 00:05:00.248 } 00:05:00.248 }, 00:05:00.248 { 00:05:00.248 "method": "bdev_iscsi_set_options", 00:05:00.248 "params": { 00:05:00.248 "timeout_sec": 30 00:05:00.248 } 00:05:00.248 }, 00:05:00.248 { 00:05:00.248 "method": "bdev_nvme_set_options", 00:05:00.248 "params": { 00:05:00.248 "action_on_timeout": "none", 00:05:00.248 "timeout_us": 0, 00:05:00.248 "timeout_admin_us": 0, 00:05:00.248 "keep_alive_timeout_ms": 10000, 00:05:00.248 "arbitration_burst": 0, 00:05:00.248 "low_priority_weight": 0, 00:05:00.248 "medium_priority_weight": 0, 00:05:00.248 "high_priority_weight": 0, 00:05:00.248 "nvme_adminq_poll_period_us": 10000, 00:05:00.248 "nvme_ioq_poll_period_us": 0, 00:05:00.248 "io_queue_requests": 0, 00:05:00.248 "delay_cmd_submit": true, 00:05:00.248 "transport_retry_count": 4, 00:05:00.248 "bdev_retry_count": 3, 00:05:00.248 "transport_ack_timeout": 0, 00:05:00.248 "ctrlr_loss_timeout_sec": 0, 00:05:00.248 "reconnect_delay_sec": 0, 00:05:00.249 "fast_io_fail_timeout_sec": 0, 00:05:00.249 "disable_auto_failback": false, 00:05:00.249 "generate_uuids": false, 00:05:00.249 "transport_tos": 0, 00:05:00.249 "nvme_error_stat": false, 00:05:00.249 "rdma_srq_size": 0, 00:05:00.249 "io_path_stat": false, 00:05:00.249 "allow_accel_sequence": false, 00:05:00.249 "rdma_max_cq_size": 0, 00:05:00.249 "rdma_cm_event_timeout_ms": 0, 00:05:00.249 "dhchap_digests": [ 00:05:00.249 "sha256", 00:05:00.249 "sha384", 00:05:00.249 "sha512" 00:05:00.249 ], 00:05:00.249 "dhchap_dhgroups": [ 00:05:00.249 "null", 00:05:00.249 "ffdhe2048", 00:05:00.249 "ffdhe3072", 00:05:00.249 "ffdhe4096", 00:05:00.249 
"ffdhe6144", 00:05:00.249 "ffdhe8192" 00:05:00.249 ] 00:05:00.249 } 00:05:00.249 }, 00:05:00.249 { 00:05:00.249 "method": "bdev_nvme_set_hotplug", 00:05:00.249 "params": { 00:05:00.249 "period_us": 100000, 00:05:00.249 "enable": false 00:05:00.249 } 00:05:00.249 }, 00:05:00.249 { 00:05:00.249 "method": "bdev_wait_for_examine" 00:05:00.249 } 00:05:00.249 ] 00:05:00.249 }, 00:05:00.249 { 00:05:00.249 "subsystem": "scsi", 00:05:00.249 "config": null 00:05:00.249 }, 00:05:00.249 { 00:05:00.249 "subsystem": "scheduler", 00:05:00.249 "config": [ 00:05:00.249 { 00:05:00.249 "method": "framework_set_scheduler", 00:05:00.249 "params": { 00:05:00.249 "name": "static" 00:05:00.249 } 00:05:00.249 } 00:05:00.249 ] 00:05:00.249 }, 00:05:00.249 { 00:05:00.249 "subsystem": "vhost_scsi", 00:05:00.249 "config": [] 00:05:00.249 }, 00:05:00.249 { 00:05:00.249 "subsystem": "vhost_blk", 00:05:00.249 "config": [] 00:05:00.249 }, 00:05:00.249 { 00:05:00.249 "subsystem": "ublk", 00:05:00.249 "config": [] 00:05:00.249 }, 00:05:00.249 { 00:05:00.249 "subsystem": "nbd", 00:05:00.249 "config": [] 00:05:00.249 }, 00:05:00.249 { 00:05:00.249 "subsystem": "nvmf", 00:05:00.249 "config": [ 00:05:00.249 { 00:05:00.249 "method": "nvmf_set_config", 00:05:00.249 "params": { 00:05:00.249 "discovery_filter": "match_any", 00:05:00.249 "admin_cmd_passthru": { 00:05:00.249 "identify_ctrlr": false 00:05:00.249 } 00:05:00.249 } 00:05:00.249 }, 00:05:00.249 { 00:05:00.249 "method": "nvmf_set_max_subsystems", 00:05:00.249 "params": { 00:05:00.249 "max_subsystems": 1024 00:05:00.249 } 00:05:00.249 }, 00:05:00.249 { 00:05:00.249 "method": "nvmf_set_crdt", 00:05:00.249 "params": { 00:05:00.249 "crdt1": 0, 00:05:00.249 "crdt2": 0, 00:05:00.249 "crdt3": 0 00:05:00.249 } 00:05:00.249 }, 00:05:00.249 { 00:05:00.249 "method": "nvmf_create_transport", 00:05:00.249 "params": { 00:05:00.249 "trtype": "TCP", 00:05:00.249 "max_queue_depth": 128, 00:05:00.249 "max_io_qpairs_per_ctrlr": 127, 00:05:00.249 "in_capsule_data_size": 4096, 00:05:00.249 "max_io_size": 131072, 00:05:00.249 "io_unit_size": 131072, 00:05:00.249 "max_aq_depth": 128, 00:05:00.249 "num_shared_buffers": 511, 00:05:00.249 "buf_cache_size": 4294967295, 00:05:00.249 "dif_insert_or_strip": false, 00:05:00.249 "zcopy": false, 00:05:00.249 "c2h_success": true, 00:05:00.249 "sock_priority": 0, 00:05:00.249 "abort_timeout_sec": 1, 00:05:00.249 "ack_timeout": 0, 00:05:00.249 "data_wr_pool_size": 0 00:05:00.249 } 00:05:00.249 } 00:05:00.249 ] 00:05:00.249 }, 00:05:00.249 { 00:05:00.249 "subsystem": "iscsi", 00:05:00.249 "config": [ 00:05:00.249 { 00:05:00.249 "method": "iscsi_set_options", 00:05:00.249 "params": { 00:05:00.249 "node_base": "iqn.2016-06.io.spdk", 00:05:00.249 "max_sessions": 128, 00:05:00.249 "max_connections_per_session": 2, 00:05:00.249 "max_queue_depth": 64, 00:05:00.249 "default_time2wait": 2, 00:05:00.249 "default_time2retain": 20, 00:05:00.249 "first_burst_length": 8192, 00:05:00.249 "immediate_data": true, 00:05:00.249 "allow_duplicated_isid": false, 00:05:00.249 "error_recovery_level": 0, 00:05:00.249 "nop_timeout": 60, 00:05:00.249 "nop_in_interval": 30, 00:05:00.249 "disable_chap": false, 00:05:00.249 "require_chap": false, 00:05:00.249 "mutual_chap": false, 00:05:00.249 "chap_group": 0, 00:05:00.249 "max_large_datain_per_connection": 64, 00:05:00.249 "max_r2t_per_connection": 4, 00:05:00.249 "pdu_pool_size": 36864, 00:05:00.249 "immediate_data_pool_size": 16384, 00:05:00.249 "data_out_pool_size": 2048 00:05:00.249 } 00:05:00.249 } 00:05:00.249 ] 00:05:00.249 } 
00:05:00.249 ] 00:05:00.249 } 00:05:00.249 01:33:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:00.249 01:33:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3912323 00:05:00.249 01:33:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # '[' -z 3912323 ']' 00:05:00.249 01:33:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # kill -0 3912323 00:05:00.249 01:33:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # uname 00:05:00.249 01:33:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:00.249 01:33:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3912323 00:05:00.249 01:33:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:00.249 01:33:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:00.249 01:33:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3912323' 00:05:00.249 killing process with pid 3912323 00:05:00.249 01:33:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # kill 3912323 00:05:00.249 01:33:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # wait 3912323 00:05:00.507 01:33:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3912459 00:05:00.507 01:33:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:00.507 01:33:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:05.768 01:33:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3912459 00:05:05.768 01:33:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # '[' -z 3912459 ']' 00:05:05.768 01:33:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # kill -0 3912459 00:05:05.768 01:33:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # uname 00:05:05.768 01:33:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:05.768 01:33:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3912459 00:05:05.768 01:33:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:05.768 01:33:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:05.768 01:33:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3912459' 00:05:05.768 killing process with pid 3912459 00:05:05.768 01:33:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # kill 3912459 00:05:05.768 01:33:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # wait 3912459 00:05:06.026 01:33:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:06.026 01:33:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:06.026 00:05:06.026 real 0m6.501s 00:05:06.026 user 0m6.065s 00:05:06.026 sys 0m0.716s 00:05:06.026 01:33:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # xtrace_disable 
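Condensed, the round trip that just completed looks like this (a sketch, run from the repo root; the output redirection to log.txt is an assumption about what the harness wires up):

scripts/rpc.py nvmf_create_transport -t tcp           # make the saved config non-trivial
scripts/rpc.py save_config > test/rpc/config.json     # dump every subsystem as JSON
build/bin/spdk_tgt --no-rpc-server -m 0x1 \
    --json test/rpc/config.json > test/rpc/log.txt 2>&1 &
sleep 5; kill "$!"
grep -q 'TCP Transport Init' test/rpc/log.txt         # the transport was rebuilt from JSON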
00:05:06.026 01:33:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:06.026 ************************************ 00:05:06.026 END TEST skip_rpc_with_json 00:05:06.026 ************************************ 00:05:06.026 01:33:29 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:06.026 01:33:29 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:06.026 01:33:29 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:06.026 01:33:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.026 ************************************ 00:05:06.026 START TEST skip_rpc_with_delay 00:05:06.026 ************************************ 00:05:06.026 01:33:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # test_skip_rpc_with_delay 00:05:06.026 01:33:29 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:06.026 01:33:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # local es=0 00:05:06.026 01:33:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:06.026 01:33:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:06.026 01:33:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:06.026 01:33:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:06.026 01:33:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:06.026 01:33:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:06.026 01:33:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:06.026 01:33:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:06.026 01:33:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:06.026 01:33:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:06.026 [2024-05-15 01:33:29.932831] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
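That error is the whole point of the delay test: --wait-for-rpc asks the app to pause until an RPC tells it to continue, which is impossible when --no-rpc-server disables the RPC server. As a sketch (the harness's NOT wrapper performs the same inversion with extra bookkeeping):

if build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "unexpected: spdk_tgt started with no RPC server to wait on" >&2
    exit 1
fi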
00:05:06.026 [2024-05-15 01:33:29.932959] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:06.026 01:33:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # es=1 00:05:06.026 01:33:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:06.026 01:33:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:06.026 01:33:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:06.026 00:05:06.026 real 0m0.072s 00:05:06.026 user 0m0.046s 00:05:06.026 sys 0m0.026s 00:05:06.026 01:33:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:06.026 01:33:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:06.026 ************************************ 00:05:06.026 END TEST skip_rpc_with_delay 00:05:06.026 ************************************ 00:05:06.284 01:33:29 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:06.284 01:33:29 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:06.284 01:33:29 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:06.284 01:33:29 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:06.284 01:33:29 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:06.284 01:33:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.284 ************************************ 00:05:06.284 START TEST exit_on_failed_rpc_init 00:05:06.284 ************************************ 00:05:06.284 01:33:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # test_exit_on_failed_rpc_init 00:05:06.284 01:33:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3913179 00:05:06.284 01:33:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:06.284 01:33:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3913179 00:05:06.284 01:33:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@828 -- # '[' -z 3913179 ']' 00:05:06.284 01:33:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.284 01:33:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:06.284 01:33:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.284 01:33:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:06.284 01:33:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:06.284 [2024-05-15 01:33:30.052065] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
00:05:06.284 [2024-05-15 01:33:30.052167] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3913179 ] 00:05:06.284 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.284 [2024-05-15 01:33:30.114997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.284 [2024-05-15 01:33:30.196385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.543 01:33:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:06.543 01:33:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@861 -- # return 0 00:05:06.543 01:33:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:06.543 01:33:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:06.543 01:33:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # local es=0 00:05:06.543 01:33:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:06.543 01:33:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:06.543 01:33:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:06.543 01:33:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:06.543 01:33:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:06.543 01:33:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:06.543 01:33:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:06.543 01:33:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:06.543 01:33:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:06.543 01:33:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:06.803 [2024-05-15 01:33:30.491823] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:05:06.803 [2024-05-15 01:33:30.491915] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3913194 ] 00:05:06.803 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.803 [2024-05-15 01:33:30.557841] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.803 [2024-05-15 01:33:30.646706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.803 [2024-05-15 01:33:30.646838] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
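The failure path being exercised here takes only two commands to reproduce (a sketch; both instances default to the same /var/tmp/spdk.sock):

build/bin/spdk_tgt -m 0x1 &    # first instance claims /var/tmp/spdk.sock
sleep 5                        # crude stand-in for waitforlisten
build/bin/spdk_tgt -m 0x2      # second instance fails: "RPC Unix domain socket
                               # path /var/tmp/spdk.sock in use. Specify another."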
00:05:06.803 [2024-05-15 01:33:30.646857] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:06.803 [2024-05-15 01:33:30.646869] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:07.113 01:33:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # es=234 00:05:07.113 01:33:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:07.113 01:33:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # es=106 00:05:07.113 01:33:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # case "$es" in 00:05:07.113 01:33:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # es=1 00:05:07.113 01:33:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:07.113 01:33:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:07.113 01:33:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3913179 00:05:07.113 01:33:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@947 -- # '[' -z 3913179 ']' 00:05:07.113 01:33:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # kill -0 3913179 00:05:07.113 01:33:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # uname 00:05:07.114 01:33:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:07.114 01:33:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3913179 00:05:07.114 01:33:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:07.114 01:33:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:07.114 01:33:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3913179' 00:05:07.114 killing process with pid 3913179 00:05:07.114 01:33:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # kill 3913179 00:05:07.114 01:33:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # wait 3913179 00:05:07.371 00:05:07.371 real 0m1.181s 00:05:07.371 user 0m1.276s 00:05:07.371 sys 0m0.467s 00:05:07.371 01:33:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:07.371 01:33:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:07.371 ************************************ 00:05:07.371 END TEST exit_on_failed_rpc_init 00:05:07.371 ************************************ 00:05:07.371 01:33:31 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:07.371 00:05:07.371 real 0m13.478s 00:05:07.371 user 0m12.617s 00:05:07.371 sys 0m1.716s 00:05:07.371 01:33:31 skip_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:07.371 01:33:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.371 ************************************ 00:05:07.371 END TEST skip_rpc 00:05:07.371 ************************************ 00:05:07.371 01:33:31 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:07.371 01:33:31 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:07.371 01:33:31 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:07.371 01:33:31 -- 
common/autotest_common.sh@10 -- # set +x 00:05:07.371 ************************************ 00:05:07.371 START TEST rpc_client 00:05:07.371 ************************************ 00:05:07.371 01:33:31 rpc_client -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:07.630 * Looking for test storage... 00:05:07.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:07.630 01:33:31 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:07.630 OK 00:05:07.630 01:33:31 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:07.630 00:05:07.630 real 0m0.064s 00:05:07.630 user 0m0.028s 00:05:07.630 sys 0m0.039s 00:05:07.630 01:33:31 rpc_client -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:07.630 01:33:31 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:07.630 ************************************ 00:05:07.630 END TEST rpc_client 00:05:07.630 ************************************ 00:05:07.630 01:33:31 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:07.630 01:33:31 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:07.630 01:33:31 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:07.630 01:33:31 -- common/autotest_common.sh@10 -- # set +x 00:05:07.630 ************************************ 00:05:07.630 START TEST json_config 00:05:07.630 ************************************ 00:05:07.630 01:33:31 json_config -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:07.630 01:33:31 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:07.630 01:33:31 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:07.630 01:33:31 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:07.630 01:33:31 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:07.630 01:33:31 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:07.630 01:33:31 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:07.630 01:33:31 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:07.630 01:33:31 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:07.630 01:33:31 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:07.630 01:33:31 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:07.630 01:33:31 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:07.630 01:33:31 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:07.630 01:33:31 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:05:07.630 01:33:31 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:05:07.630 01:33:31 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:07.630 01:33:31 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:07.630 01:33:31 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:07.630 01:33:31 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:07.630 01:33:31 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:07.630 01:33:31 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:07.630 01:33:31 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:07.630 01:33:31 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:07.630 01:33:31 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.630 01:33:31 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.630 01:33:31 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.630 01:33:31 json_config -- paths/export.sh@5 -- # export PATH 00:05:07.630 01:33:31 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.630 01:33:31 json_config -- nvmf/common.sh@47 -- # : 0 00:05:07.630 01:33:31 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:07.630 01:33:31 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:07.630 01:33:31 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:07.630 01:33:31 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:07.630 01:33:31 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:07.630 01:33:31 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:07.630 01:33:31 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:07.630 01:33:31 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:07.630 01:33:31 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:07.630 01:33:31 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:07.630 01:33:31 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:07.630 01:33:31 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:07.630 01:33:31 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:07.630 01:33:31 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:07.630 01:33:31 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:07.630 01:33:31 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:07.630 01:33:31 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:07.630 01:33:31 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:07.630 01:33:31 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:07.630 01:33:31 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:07.630 01:33:31 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:07.630 01:33:31 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:07.630 01:33:31 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:07.630 01:33:31 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:07.630 INFO: JSON configuration test init 00:05:07.630 01:33:31 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:07.630 01:33:31 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:07.630 01:33:31 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:05:07.630 01:33:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.630 01:33:31 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:07.630 01:33:31 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:05:07.630 01:33:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.630 01:33:31 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:07.630 01:33:31 json_config -- json_config/common.sh@9 -- # local app=target 00:05:07.630 01:33:31 json_config -- json_config/common.sh@10 -- # shift 00:05:07.630 01:33:31 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:07.630 01:33:31 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:07.630 01:33:31 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:07.630 01:33:31 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:07.631 01:33:31 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:07.631 01:33:31 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3913435 00:05:07.631 01:33:31 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:07.631 01:33:31 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:07.631 Waiting for target to run... 
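The launch-and-poll handshake above repeats for every app these tests start: spdk_tgt is run with a dedicated UNIX-domain RPC socket and --wait-for-rpc, and the waitforlisten call that follows retries until the socket answers, giving up after thirty half-second attempts per the (( i < 30 )) / sleep 0.5 loop traced further down. A minimal sketch of that pattern, assuming the paths shown in this log; probing with spdk_get_version (a method this target does expose, see the rpc_get_methods dump near the end) is a stand-in for the harness's exact readiness check:

    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    pid=$!
    for _ in $(seq 1 30); do                        # 30 tries x 0.5 s, as in the trace
        if scripts/rpc.py -s /var/tmp/spdk_tgt.sock spdk_get_version >/dev/null 2>&1; then
            break                                   # socket is up, target is serving RPCs
        fi
        kill -0 "$pid" 2>/dev/null || exit 1        # target died before it came up
        sleep 0.5
    done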
00:05:07.631 01:33:31 json_config -- json_config/common.sh@25 -- # waitforlisten 3913435 /var/tmp/spdk_tgt.sock 00:05:07.631 01:33:31 json_config -- common/autotest_common.sh@828 -- # '[' -z 3913435 ']' 00:05:07.631 01:33:31 json_config -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:07.631 01:33:31 json_config -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:07.631 01:33:31 json_config -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:07.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:07.631 01:33:31 json_config -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:07.631 01:33:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.631 [2024-05-15 01:33:31.475760] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:05:07.631 [2024-05-15 01:33:31.475845] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3913435 ] 00:05:07.631 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.196 [2024-05-15 01:33:31.850010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.196 [2024-05-15 01:33:31.909366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.761 01:33:32 json_config -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:08.761 01:33:32 json_config -- common/autotest_common.sh@861 -- # return 0 00:05:08.761 01:33:32 json_config -- json_config/common.sh@26 -- # echo '' 00:05:08.761 00:05:08.761 01:33:32 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:08.761 01:33:32 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:08.761 01:33:32 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:05:08.761 01:33:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.761 01:33:32 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:08.761 01:33:32 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:08.761 01:33:32 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:05:08.761 01:33:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.762 01:33:32 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:08.762 01:33:32 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:08.762 01:33:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:12.043 01:33:35 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:12.043 01:33:35 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:12.043 01:33:35 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:05:12.043 01:33:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.043 01:33:35 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:12.043 01:33:35 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:12.043 01:33:35 json_config -- 
json_config/json_config.sh@46 -- # local enabled_types 00:05:12.043 01:33:35 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:12.043 01:33:35 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:12.043 01:33:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:12.043 01:33:35 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:12.043 01:33:35 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:12.043 01:33:35 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:12.043 01:33:35 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:12.043 01:33:35 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:05:12.043 01:33:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.043 01:33:35 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:12.043 01:33:35 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:12.043 01:33:35 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:12.043 01:33:35 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:12.043 01:33:35 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:12.043 01:33:35 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:12.043 01:33:35 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:12.043 01:33:35 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:05:12.043 01:33:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.043 01:33:35 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:12.043 01:33:35 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:12.043 01:33:35 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:12.043 01:33:35 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:12.043 01:33:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:12.300 MallocForNvmf0 00:05:12.300 01:33:36 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:12.300 01:33:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:12.558 MallocForNvmf1 00:05:12.558 01:33:36 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:12.558 01:33:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:12.816 [2024-05-15 01:33:36.568853] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:12.816 01:33:36 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:12.816 01:33:36 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:13.073 01:33:36 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:13.073 01:33:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:13.331 01:33:37 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:13.331 01:33:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:13.589 01:33:37 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:13.589 01:33:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:13.848 [2024-05-15 01:33:37.535555] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:05:13.848 [2024-05-15 01:33:37.536198] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:13.848 01:33:37 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:13.848 01:33:37 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:05:13.848 01:33:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.848 01:33:37 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:13.848 01:33:37 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:05:13.848 01:33:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.848 01:33:37 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:13.848 01:33:37 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:13.848 01:33:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:14.105 MallocBdevForConfigChangeCheck 00:05:14.105 01:33:37 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:14.105 01:33:37 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:05:14.105 01:33:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.105 01:33:37 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:14.105 01:33:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:14.364 01:33:38 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:14.364 INFO: shutting down applications... 
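At this point the target holds the full NVMe-oF state the rest of the test round-trips: two malloc bdevs, a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 carrying both bdevs as namespaces, and a listener on 127.0.0.1:4420 (note the logged deprecation warning that [listen_]address.transport gives way to trtype in v24.09). Before the shutdown phase below, the build-up traced above condenses to these rpc.py calls, copied from the log with only the long workspace prefix shortened:

    rpc="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0        # 8 MB bdev, 512 B blocks
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1       # 4 MB bdev, 1024 B blocks
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420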
00:05:14.364 01:33:38 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:14.364 01:33:38 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:14.364 01:33:38 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:14.364 01:33:38 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:16.262 Calling clear_iscsi_subsystem 00:05:16.262 Calling clear_nvmf_subsystem 00:05:16.262 Calling clear_nbd_subsystem 00:05:16.262 Calling clear_ublk_subsystem 00:05:16.262 Calling clear_vhost_blk_subsystem 00:05:16.262 Calling clear_vhost_scsi_subsystem 00:05:16.262 Calling clear_bdev_subsystem 00:05:16.262 01:33:39 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:16.262 01:33:39 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:16.262 01:33:39 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:16.262 01:33:39 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:16.262 01:33:39 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:16.262 01:33:39 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:16.518 01:33:40 json_config -- json_config/json_config.sh@345 -- # break 00:05:16.518 01:33:40 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:16.518 01:33:40 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:16.518 01:33:40 json_config -- json_config/common.sh@31 -- # local app=target 00:05:16.518 01:33:40 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:16.518 01:33:40 json_config -- json_config/common.sh@35 -- # [[ -n 3913435 ]] 00:05:16.518 01:33:40 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3913435 00:05:16.518 [2024-05-15 01:33:40.226808] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:05:16.518 01:33:40 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:16.518 01:33:40 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:16.518 01:33:40 json_config -- json_config/common.sh@41 -- # kill -0 3913435 00:05:16.518 01:33:40 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:17.082 01:33:40 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:17.082 01:33:40 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:17.082 01:33:40 json_config -- json_config/common.sh@41 -- # kill -0 3913435 00:05:17.082 01:33:40 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:17.082 01:33:40 json_config -- json_config/common.sh@43 -- # break 00:05:17.082 01:33:40 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:17.082 01:33:40 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:17.082 SPDK target shutdown done 00:05:17.082 01:33:40 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching 
applications...' 00:05:17.082 INFO: relaunching applications... 00:05:17.082 01:33:40 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:17.082 01:33:40 json_config -- json_config/common.sh@9 -- # local app=target 00:05:17.082 01:33:40 json_config -- json_config/common.sh@10 -- # shift 00:05:17.082 01:33:40 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:17.082 01:33:40 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:17.082 01:33:40 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:17.083 01:33:40 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:17.083 01:33:40 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:17.083 01:33:40 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3914628 00:05:17.083 01:33:40 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:17.083 01:33:40 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:17.083 Waiting for target to run... 00:05:17.083 01:33:40 json_config -- json_config/common.sh@25 -- # waitforlisten 3914628 /var/tmp/spdk_tgt.sock 00:05:17.083 01:33:40 json_config -- common/autotest_common.sh@828 -- # '[' -z 3914628 ']' 00:05:17.083 01:33:40 json_config -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:17.083 01:33:40 json_config -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:17.083 01:33:40 json_config -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:17.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:17.083 01:33:40 json_config -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:17.083 01:33:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.083 [2024-05-15 01:33:40.783180] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
00:05:17.083 [2024-05-15 01:33:40.783296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3914628 ] 00:05:17.083 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.341 [2024-05-15 01:33:41.147795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.341 [2024-05-15 01:33:41.207206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.619 [2024-05-15 01:33:44.238186] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:20.619 [2024-05-15 01:33:44.270156] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:05:20.619 [2024-05-15 01:33:44.270747] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:20.619 01:33:44 json_config -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:20.619 01:33:44 json_config -- common/autotest_common.sh@861 -- # return 0 00:05:20.619 01:33:44 json_config -- json_config/common.sh@26 -- # echo '' 00:05:20.619 00:05:20.619 01:33:44 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:20.619 01:33:44 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:20.619 INFO: Checking if target configuration is the same... 00:05:20.619 01:33:44 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:20.619 01:33:44 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:20.619 01:33:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:20.619 + '[' 2 -ne 2 ']' 00:05:20.619 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:20.619 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:20.619 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:20.619 +++ basename /dev/fd/62 00:05:20.619 ++ mktemp /tmp/62.XXX 00:05:20.619 + tmp_file_1=/tmp/62.1uE 00:05:20.619 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:20.619 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:20.619 + tmp_file_2=/tmp/spdk_tgt_config.json.wCj 00:05:20.619 + ret=0 00:05:20.619 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:20.877 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:20.877 + diff -u /tmp/62.1uE /tmp/spdk_tgt_config.json.wCj 00:05:20.877 + echo 'INFO: JSON config files are the same' 00:05:20.877 INFO: JSON config files are the same 00:05:20.877 + rm /tmp/62.1uE /tmp/spdk_tgt_config.json.wCj 00:05:20.877 + exit 0 00:05:20.877 01:33:44 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:20.877 01:33:44 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:20.877 INFO: changing configuration and checking if this can be detected... 
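The "Checking if target configuration is the same" pass that just returned exit 0 works by relaunching the target from spdk_tgt_config.json, asking the live target for its view via save_config, normalizing both documents with config_filter.py -method sort, and diffing them: no diff means the JSON round-trip lost nothing. In outline (xtrace does not echo redirections, so the stdin/stdout plumbing here is inferred, and the sorted file names are stand-ins for the mktemp names /tmp/62.1uE and /tmp/spdk_tgt_config.json.wCj above):

    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > live.json
    test/json_config/config_filter.py -method sort < live.json            > sorted_live.json
    test/json_config/config_filter.py -method sort < spdk_tgt_config.json > sorted_disk.json
    diff -u sorted_live.json sorted_disk.json && echo 'INFO: JSON config files are the same'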
00:05:20.877 01:33:44 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:20.877 01:33:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:21.134 01:33:44 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:21.134 01:33:44 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:21.134 01:33:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:21.134 + '[' 2 -ne 2 ']' 00:05:21.134 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:21.134 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:21.134 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:21.134 +++ basename /dev/fd/62 00:05:21.134 ++ mktemp /tmp/62.XXX 00:05:21.134 + tmp_file_1=/tmp/62.Ke5 00:05:21.134 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:21.134 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:21.134 + tmp_file_2=/tmp/spdk_tgt_config.json.NHF 00:05:21.134 + ret=0 00:05:21.134 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:21.700 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:21.700 + diff -u /tmp/62.Ke5 /tmp/spdk_tgt_config.json.NHF 00:05:21.700 + ret=1 00:05:21.700 + echo '=== Start of file: /tmp/62.Ke5 ===' 00:05:21.700 + cat /tmp/62.Ke5 00:05:21.700 + echo '=== End of file: /tmp/62.Ke5 ===' 00:05:21.700 + echo '' 00:05:21.700 + echo '=== Start of file: /tmp/spdk_tgt_config.json.NHF ===' 00:05:21.700 + cat /tmp/spdk_tgt_config.json.NHF 00:05:21.700 + echo '=== End of file: /tmp/spdk_tgt_config.json.NHF ===' 00:05:21.700 + echo '' 00:05:21.700 + rm /tmp/62.Ke5 /tmp/spdk_tgt_config.json.NHF 00:05:21.700 + exit 1 00:05:21.700 01:33:45 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:21.700 INFO: configuration change detected. 
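The negative half of the check then runs: deleting MallocBdevForConfigChangeCheck, a marker bdev created solely for this purpose, makes the live config diverge from the on-disk JSON, so the same sort-and-diff now exits 1 and the harness treats that failure as the expected "change detected" outcome. A sketch of the step, assuming json_diff.sh takes the two documents as file arguments, which matches the /dev/fd/62 process-substitution form in the trace:

    scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
    if ! test/json_config/json_diff.sh \
            <(scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config) spdk_tgt_config.json; then
        echo 'INFO: configuration change detected.'    # diff exited 1: the delete was observed
    fi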
00:05:21.700 01:33:45 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:21.700 01:33:45 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:21.700 01:33:45 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:05:21.700 01:33:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:21.700 01:33:45 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:21.700 01:33:45 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:21.700 01:33:45 json_config -- json_config/json_config.sh@317 -- # [[ -n 3914628 ]] 00:05:21.700 01:33:45 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:21.700 01:33:45 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:21.700 01:33:45 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:05:21.700 01:33:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:21.700 01:33:45 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:21.700 01:33:45 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:21.700 01:33:45 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:21.700 01:33:45 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:21.700 01:33:45 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:21.700 01:33:45 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:21.700 01:33:45 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:05:21.700 01:33:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:21.700 01:33:45 json_config -- json_config/json_config.sh@323 -- # killprocess 3914628 00:05:21.700 01:33:45 json_config -- common/autotest_common.sh@947 -- # '[' -z 3914628 ']' 00:05:21.700 01:33:45 json_config -- common/autotest_common.sh@951 -- # kill -0 3914628 00:05:21.700 01:33:45 json_config -- common/autotest_common.sh@952 -- # uname 00:05:21.700 01:33:45 json_config -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:21.701 01:33:45 json_config -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3914628 00:05:21.701 01:33:45 json_config -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:21.701 01:33:45 json_config -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:21.701 01:33:45 json_config -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3914628' 00:05:21.701 killing process with pid 3914628 00:05:21.701 01:33:45 json_config -- common/autotest_common.sh@966 -- # kill 3914628 00:05:21.701 [2024-05-15 01:33:45.488712] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:05:21.701 01:33:45 json_config -- common/autotest_common.sh@971 -- # wait 3914628 00:05:23.598 01:33:47 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:23.598 01:33:47 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:23.598 01:33:47 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:05:23.598 01:33:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.598 01:33:47 
json_config -- json_config/json_config.sh@328 -- # return 0 00:05:23.598 01:33:47 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:23.598 INFO: Success 00:05:23.598 00:05:23.598 real 0m15.682s 00:05:23.598 user 0m17.556s 00:05:23.598 sys 0m1.882s 00:05:23.598 01:33:47 json_config -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:23.598 01:33:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.598 ************************************ 00:05:23.598 END TEST json_config 00:05:23.598 ************************************ 00:05:23.598 01:33:47 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:23.598 01:33:47 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:23.598 01:33:47 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:23.598 01:33:47 -- common/autotest_common.sh@10 -- # set +x 00:05:23.598 ************************************ 00:05:23.598 START TEST json_config_extra_key 00:05:23.598 ************************************ 00:05:23.598 01:33:47 json_config_extra_key -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:23.598 01:33:47 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:23.598 01:33:47 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:23.598 01:33:47 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:23.598 01:33:47 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:23.598 01:33:47 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:23.598 01:33:47 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:23.598 01:33:47 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:23.598 01:33:47 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:23.598 01:33:47 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:23.598 01:33:47 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:23.598 01:33:47 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:23.598 01:33:47 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:23.598 01:33:47 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:05:23.598 01:33:47 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:05:23.598 01:33:47 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:23.598 01:33:47 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:23.598 01:33:47 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:23.598 01:33:47 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:23.598 01:33:47 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:23.598 01:33:47 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:23.598 01:33:47 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:23.598 
01:33:47 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:23.598 01:33:47 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.598 01:33:47 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.598 01:33:47 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.598 01:33:47 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:23.598 01:33:47 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.598 01:33:47 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:23.598 01:33:47 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:23.598 01:33:47 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:23.598 01:33:47 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:23.598 01:33:47 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:23.598 01:33:47 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:23.598 01:33:47 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:23.599 01:33:47 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:23.599 01:33:47 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:23.599 01:33:47 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:23.599 01:33:47 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:23.599 01:33:47 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:23.599 01:33:47 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:23.599 01:33:47 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:23.599 01:33:47 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:23.599 01:33:47 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:23.599 01:33:47 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:23.599 01:33:47 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:23.599 01:33:47 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:23.599 01:33:47 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:23.599 INFO: launching applications... 00:05:23.599 01:33:47 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:23.599 01:33:47 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:23.599 01:33:47 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:23.599 01:33:47 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:23.599 01:33:47 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:23.599 01:33:47 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:23.599 01:33:47 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:23.599 01:33:47 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:23.599 01:33:47 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3915536 00:05:23.599 01:33:47 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:23.599 01:33:47 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:23.599 Waiting for target to run... 00:05:23.599 01:33:47 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3915536 /var/tmp/spdk_tgt.sock 00:05:23.599 01:33:47 json_config_extra_key -- common/autotest_common.sh@828 -- # '[' -z 3915536 ']' 00:05:23.599 01:33:47 json_config_extra_key -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:23.599 01:33:47 json_config_extra_key -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:23.599 01:33:47 json_config_extra_key -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:23.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:23.599 01:33:47 json_config_extra_key -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:23.599 01:33:47 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:23.599 [2024-05-15 01:33:47.200162] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
00:05:23.599 [2024-05-15 01:33:47.200279] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3915536 ] 00:05:23.599 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.857 [2024-05-15 01:33:47.544070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.857 [2024-05-15 01:33:47.603699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.420 01:33:48 json_config_extra_key -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:24.420 01:33:48 json_config_extra_key -- common/autotest_common.sh@861 -- # return 0 00:05:24.420 01:33:48 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:24.420 00:05:24.420 01:33:48 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:24.420 INFO: shutting down applications... 00:05:24.420 01:33:48 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:24.420 01:33:48 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:24.420 01:33:48 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:24.420 01:33:48 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3915536 ]] 00:05:24.420 01:33:48 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3915536 00:05:24.420 01:33:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:24.420 01:33:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:24.420 01:33:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3915536 00:05:24.420 01:33:48 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:24.985 01:33:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:24.985 01:33:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:24.985 01:33:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3915536 00:05:24.985 01:33:48 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:24.985 01:33:48 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:24.985 01:33:48 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:24.985 01:33:48 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:24.985 SPDK target shutdown done 00:05:24.985 01:33:48 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:24.985 Success 00:05:24.985 00:05:24.985 real 0m1.540s 00:05:24.985 user 0m1.484s 00:05:24.985 sys 0m0.431s 00:05:24.985 01:33:48 json_config_extra_key -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:24.985 01:33:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:24.985 ************************************ 00:05:24.985 END TEST json_config_extra_key 00:05:24.985 ************************************ 00:05:24.985 01:33:48 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:24.985 01:33:48 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:24.985 01:33:48 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:24.985 01:33:48 -- common/autotest_common.sh@10 -- # set +x 00:05:24.985 ************************************ 
00:05:24.985 START TEST alias_rpc 00:05:24.985 ************************************ 00:05:24.985 01:33:48 alias_rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:24.985 * Looking for test storage... 00:05:24.985 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:24.985 01:33:48 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:24.985 01:33:48 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3915723 00:05:24.985 01:33:48 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.985 01:33:48 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3915723 00:05:24.985 01:33:48 alias_rpc -- common/autotest_common.sh@828 -- # '[' -z 3915723 ']' 00:05:24.985 01:33:48 alias_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.985 01:33:48 alias_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:24.985 01:33:48 alias_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.985 01:33:48 alias_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:24.985 01:33:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.985 [2024-05-15 01:33:48.802990] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:05:24.985 [2024-05-15 01:33:48.803091] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3915723 ] 00:05:24.985 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.985 [2024-05-15 01:33:48.871485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.243 [2024-05-15 01:33:48.952185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.501 01:33:49 alias_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:25.501 01:33:49 alias_rpc -- common/autotest_common.sh@861 -- # return 0 00:05:25.501 01:33:49 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:25.758 01:33:49 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3915723 00:05:25.758 01:33:49 alias_rpc -- common/autotest_common.sh@947 -- # '[' -z 3915723 ']' 00:05:25.758 01:33:49 alias_rpc -- common/autotest_common.sh@951 -- # kill -0 3915723 00:05:25.758 01:33:49 alias_rpc -- common/autotest_common.sh@952 -- # uname 00:05:25.758 01:33:49 alias_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:25.758 01:33:49 alias_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3915723 00:05:25.758 01:33:49 alias_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:25.758 01:33:49 alias_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:25.758 01:33:49 alias_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3915723' 00:05:25.758 killing process with pid 3915723 00:05:25.758 01:33:49 alias_rpc -- common/autotest_common.sh@966 -- # kill 3915723 00:05:25.758 01:33:49 alias_rpc -- common/autotest_common.sh@971 -- # wait 3915723 
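The teardown just traced is the shared killprocess helper these tests all end with: confirm the pid is still alive with kill -0, read its command name with ps (reactor_0 here), refuse to signal anything running as sudo, then kill and wait so the exit status is reaped. Paraphrased rather than quoted from autotest_common.sh (the body of alias_rpc itself only needed rpc.py load_config -i beforehand, where -i reads as rpc.py's include-aliases switch so deprecated method names still resolve):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                     # gone already?
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" != sudo ] || return 1                # never SIGTERM a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"                                    # default SIGTERM
        wait "$pid"                                    # reap, propagate exit status
    }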
00:05:26.015 00:05:26.015 real 0m1.187s 00:05:26.015 user 0m1.247s 00:05:26.015 sys 0m0.435s 00:05:26.015 01:33:49 alias_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:26.015 01:33:49 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.015 ************************************ 00:05:26.015 END TEST alias_rpc 00:05:26.015 ************************************ 00:05:26.015 01:33:49 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:05:26.015 01:33:49 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:26.015 01:33:49 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:26.015 01:33:49 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:26.015 01:33:49 -- common/autotest_common.sh@10 -- # set +x 00:05:26.015 ************************************ 00:05:26.015 START TEST spdkcli_tcp 00:05:26.015 ************************************ 00:05:26.015 01:33:49 spdkcli_tcp -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:26.273 * Looking for test storage... 00:05:26.273 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:26.273 01:33:49 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:26.273 01:33:49 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:26.273 01:33:49 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:26.273 01:33:49 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:26.273 01:33:49 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:26.273 01:33:49 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:26.273 01:33:49 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:26.273 01:33:49 spdkcli_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:05:26.273 01:33:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:26.273 01:33:49 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3916031 00:05:26.273 01:33:49 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:26.273 01:33:49 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3916031 00:05:26.273 01:33:49 spdkcli_tcp -- common/autotest_common.sh@828 -- # '[' -z 3916031 ']' 00:05:26.273 01:33:49 spdkcli_tcp -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.273 01:33:49 spdkcli_tcp -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:26.273 01:33:49 spdkcli_tcp -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.273 01:33:50 spdkcli_tcp -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:26.273 01:33:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:26.273 [2024-05-15 01:33:50.050885] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
00:05:26.273 [2024-05-15 01:33:50.051006] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3916031 ] 00:05:26.273 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.273 [2024-05-15 01:33:50.118933] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:26.273 [2024-05-15 01:33:50.201185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.273 [2024-05-15 01:33:50.201189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.569 01:33:50 spdkcli_tcp -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:26.569 01:33:50 spdkcli_tcp -- common/autotest_common.sh@861 -- # return 0 00:05:26.569 01:33:50 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3916040 00:05:26.569 01:33:50 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:26.569 01:33:50 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:26.827 [ 00:05:26.827 "bdev_malloc_delete", 00:05:26.827 "bdev_malloc_create", 00:05:26.827 "bdev_null_resize", 00:05:26.827 "bdev_null_delete", 00:05:26.827 "bdev_null_create", 00:05:26.828 "bdev_nvme_cuse_unregister", 00:05:26.828 "bdev_nvme_cuse_register", 00:05:26.828 "bdev_opal_new_user", 00:05:26.828 "bdev_opal_set_lock_state", 00:05:26.828 "bdev_opal_delete", 00:05:26.828 "bdev_opal_get_info", 00:05:26.828 "bdev_opal_create", 00:05:26.828 "bdev_nvme_opal_revert", 00:05:26.828 "bdev_nvme_opal_init", 00:05:26.828 "bdev_nvme_send_cmd", 00:05:26.828 "bdev_nvme_get_path_iostat", 00:05:26.828 "bdev_nvme_get_mdns_discovery_info", 00:05:26.828 "bdev_nvme_stop_mdns_discovery", 00:05:26.828 "bdev_nvme_start_mdns_discovery", 00:05:26.828 "bdev_nvme_set_multipath_policy", 00:05:26.828 "bdev_nvme_set_preferred_path", 00:05:26.828 "bdev_nvme_get_io_paths", 00:05:26.828 "bdev_nvme_remove_error_injection", 00:05:26.828 "bdev_nvme_add_error_injection", 00:05:26.828 "bdev_nvme_get_discovery_info", 00:05:26.828 "bdev_nvme_stop_discovery", 00:05:26.828 "bdev_nvme_start_discovery", 00:05:26.828 "bdev_nvme_get_controller_health_info", 00:05:26.828 "bdev_nvme_disable_controller", 00:05:26.828 "bdev_nvme_enable_controller", 00:05:26.828 "bdev_nvme_reset_controller", 00:05:26.828 "bdev_nvme_get_transport_statistics", 00:05:26.828 "bdev_nvme_apply_firmware", 00:05:26.828 "bdev_nvme_detach_controller", 00:05:26.828 "bdev_nvme_get_controllers", 00:05:26.828 "bdev_nvme_attach_controller", 00:05:26.828 "bdev_nvme_set_hotplug", 00:05:26.828 "bdev_nvme_set_options", 00:05:26.828 "bdev_passthru_delete", 00:05:26.828 "bdev_passthru_create", 00:05:26.828 "bdev_lvol_check_shallow_copy", 00:05:26.828 "bdev_lvol_start_shallow_copy", 00:05:26.828 "bdev_lvol_grow_lvstore", 00:05:26.828 "bdev_lvol_get_lvols", 00:05:26.828 "bdev_lvol_get_lvstores", 00:05:26.828 "bdev_lvol_delete", 00:05:26.828 "bdev_lvol_set_read_only", 00:05:26.828 "bdev_lvol_resize", 00:05:26.828 "bdev_lvol_decouple_parent", 00:05:26.828 "bdev_lvol_inflate", 00:05:26.828 "bdev_lvol_rename", 00:05:26.828 "bdev_lvol_clone_bdev", 00:05:26.828 "bdev_lvol_clone", 00:05:26.828 "bdev_lvol_snapshot", 00:05:26.828 "bdev_lvol_create", 00:05:26.828 "bdev_lvol_delete_lvstore", 00:05:26.828 "bdev_lvol_rename_lvstore", 00:05:26.828 "bdev_lvol_create_lvstore", 00:05:26.828 "bdev_raid_set_options", 
00:05:26.828 "bdev_raid_remove_base_bdev", 00:05:26.828 "bdev_raid_add_base_bdev", 00:05:26.828 "bdev_raid_delete", 00:05:26.828 "bdev_raid_create", 00:05:26.828 "bdev_raid_get_bdevs", 00:05:26.828 "bdev_error_inject_error", 00:05:26.828 "bdev_error_delete", 00:05:26.828 "bdev_error_create", 00:05:26.828 "bdev_split_delete", 00:05:26.828 "bdev_split_create", 00:05:26.828 "bdev_delay_delete", 00:05:26.828 "bdev_delay_create", 00:05:26.828 "bdev_delay_update_latency", 00:05:26.828 "bdev_zone_block_delete", 00:05:26.828 "bdev_zone_block_create", 00:05:26.828 "blobfs_create", 00:05:26.828 "blobfs_detect", 00:05:26.828 "blobfs_set_cache_size", 00:05:26.828 "bdev_aio_delete", 00:05:26.828 "bdev_aio_rescan", 00:05:26.828 "bdev_aio_create", 00:05:26.828 "bdev_ftl_set_property", 00:05:26.828 "bdev_ftl_get_properties", 00:05:26.828 "bdev_ftl_get_stats", 00:05:26.828 "bdev_ftl_unmap", 00:05:26.828 "bdev_ftl_unload", 00:05:26.828 "bdev_ftl_delete", 00:05:26.828 "bdev_ftl_load", 00:05:26.828 "bdev_ftl_create", 00:05:26.828 "bdev_virtio_attach_controller", 00:05:26.828 "bdev_virtio_scsi_get_devices", 00:05:26.828 "bdev_virtio_detach_controller", 00:05:26.828 "bdev_virtio_blk_set_hotplug", 00:05:26.828 "bdev_iscsi_delete", 00:05:26.828 "bdev_iscsi_create", 00:05:26.828 "bdev_iscsi_set_options", 00:05:26.828 "accel_error_inject_error", 00:05:26.828 "ioat_scan_accel_module", 00:05:26.828 "dsa_scan_accel_module", 00:05:26.828 "iaa_scan_accel_module", 00:05:26.828 "vfu_virtio_create_scsi_endpoint", 00:05:26.828 "vfu_virtio_scsi_remove_target", 00:05:26.828 "vfu_virtio_scsi_add_target", 00:05:26.828 "vfu_virtio_create_blk_endpoint", 00:05:26.828 "vfu_virtio_delete_endpoint", 00:05:26.828 "keyring_file_remove_key", 00:05:26.828 "keyring_file_add_key", 00:05:26.828 "iscsi_get_histogram", 00:05:26.828 "iscsi_enable_histogram", 00:05:26.828 "iscsi_set_options", 00:05:26.828 "iscsi_get_auth_groups", 00:05:26.828 "iscsi_auth_group_remove_secret", 00:05:26.828 "iscsi_auth_group_add_secret", 00:05:26.828 "iscsi_delete_auth_group", 00:05:26.828 "iscsi_create_auth_group", 00:05:26.828 "iscsi_set_discovery_auth", 00:05:26.828 "iscsi_get_options", 00:05:26.828 "iscsi_target_node_request_logout", 00:05:26.828 "iscsi_target_node_set_redirect", 00:05:26.828 "iscsi_target_node_set_auth", 00:05:26.828 "iscsi_target_node_add_lun", 00:05:26.828 "iscsi_get_stats", 00:05:26.828 "iscsi_get_connections", 00:05:26.828 "iscsi_portal_group_set_auth", 00:05:26.828 "iscsi_start_portal_group", 00:05:26.828 "iscsi_delete_portal_group", 00:05:26.828 "iscsi_create_portal_group", 00:05:26.828 "iscsi_get_portal_groups", 00:05:26.828 "iscsi_delete_target_node", 00:05:26.828 "iscsi_target_node_remove_pg_ig_maps", 00:05:26.828 "iscsi_target_node_add_pg_ig_maps", 00:05:26.828 "iscsi_create_target_node", 00:05:26.828 "iscsi_get_target_nodes", 00:05:26.828 "iscsi_delete_initiator_group", 00:05:26.828 "iscsi_initiator_group_remove_initiators", 00:05:26.828 "iscsi_initiator_group_add_initiators", 00:05:26.828 "iscsi_create_initiator_group", 00:05:26.828 "iscsi_get_initiator_groups", 00:05:26.828 "nvmf_set_crdt", 00:05:26.828 "nvmf_set_config", 00:05:26.828 "nvmf_set_max_subsystems", 00:05:26.828 "nvmf_stop_mdns_prr", 00:05:26.828 "nvmf_publish_mdns_prr", 00:05:26.828 "nvmf_subsystem_get_listeners", 00:05:26.828 "nvmf_subsystem_get_qpairs", 00:05:26.828 "nvmf_subsystem_get_controllers", 00:05:26.828 "nvmf_get_stats", 00:05:26.828 "nvmf_get_transports", 00:05:26.828 "nvmf_create_transport", 00:05:26.828 "nvmf_get_targets", 00:05:26.828 
"nvmf_delete_target", 00:05:26.828 "nvmf_create_target", 00:05:26.828 "nvmf_subsystem_allow_any_host", 00:05:26.828 "nvmf_subsystem_remove_host", 00:05:26.828 "nvmf_subsystem_add_host", 00:05:26.828 "nvmf_ns_remove_host", 00:05:26.828 "nvmf_ns_add_host", 00:05:26.828 "nvmf_subsystem_remove_ns", 00:05:26.828 "nvmf_subsystem_add_ns", 00:05:26.828 "nvmf_subsystem_listener_set_ana_state", 00:05:26.828 "nvmf_discovery_get_referrals", 00:05:26.828 "nvmf_discovery_remove_referral", 00:05:26.828 "nvmf_discovery_add_referral", 00:05:26.828 "nvmf_subsystem_remove_listener", 00:05:26.828 "nvmf_subsystem_add_listener", 00:05:26.828 "nvmf_delete_subsystem", 00:05:26.828 "nvmf_create_subsystem", 00:05:26.828 "nvmf_get_subsystems", 00:05:26.828 "env_dpdk_get_mem_stats", 00:05:26.828 "nbd_get_disks", 00:05:26.828 "nbd_stop_disk", 00:05:26.828 "nbd_start_disk", 00:05:26.828 "ublk_recover_disk", 00:05:26.828 "ublk_get_disks", 00:05:26.828 "ublk_stop_disk", 00:05:26.828 "ublk_start_disk", 00:05:26.828 "ublk_destroy_target", 00:05:26.828 "ublk_create_target", 00:05:26.828 "virtio_blk_create_transport", 00:05:26.828 "virtio_blk_get_transports", 00:05:26.828 "vhost_controller_set_coalescing", 00:05:26.828 "vhost_get_controllers", 00:05:26.828 "vhost_delete_controller", 00:05:26.828 "vhost_create_blk_controller", 00:05:26.828 "vhost_scsi_controller_remove_target", 00:05:26.828 "vhost_scsi_controller_add_target", 00:05:26.828 "vhost_start_scsi_controller", 00:05:26.828 "vhost_create_scsi_controller", 00:05:26.828 "thread_set_cpumask", 00:05:26.828 "framework_get_scheduler", 00:05:26.828 "framework_set_scheduler", 00:05:26.828 "framework_get_reactors", 00:05:26.828 "thread_get_io_channels", 00:05:26.828 "thread_get_pollers", 00:05:26.828 "thread_get_stats", 00:05:26.828 "framework_monitor_context_switch", 00:05:26.828 "spdk_kill_instance", 00:05:26.828 "log_enable_timestamps", 00:05:26.828 "log_get_flags", 00:05:26.828 "log_clear_flag", 00:05:26.828 "log_set_flag", 00:05:26.828 "log_get_level", 00:05:26.828 "log_set_level", 00:05:26.828 "log_get_print_level", 00:05:26.828 "log_set_print_level", 00:05:26.828 "framework_enable_cpumask_locks", 00:05:26.828 "framework_disable_cpumask_locks", 00:05:26.828 "framework_wait_init", 00:05:26.828 "framework_start_init", 00:05:26.828 "scsi_get_devices", 00:05:26.828 "bdev_get_histogram", 00:05:26.828 "bdev_enable_histogram", 00:05:26.828 "bdev_set_qos_limit", 00:05:26.828 "bdev_set_qd_sampling_period", 00:05:26.828 "bdev_get_bdevs", 00:05:26.828 "bdev_reset_iostat", 00:05:26.828 "bdev_get_iostat", 00:05:26.828 "bdev_examine", 00:05:26.828 "bdev_wait_for_examine", 00:05:26.828 "bdev_set_options", 00:05:26.828 "notify_get_notifications", 00:05:26.828 "notify_get_types", 00:05:26.828 "accel_get_stats", 00:05:26.828 "accel_set_options", 00:05:26.828 "accel_set_driver", 00:05:26.828 "accel_crypto_key_destroy", 00:05:26.828 "accel_crypto_keys_get", 00:05:26.828 "accel_crypto_key_create", 00:05:26.828 "accel_assign_opc", 00:05:26.828 "accel_get_module_info", 00:05:26.828 "accel_get_opc_assignments", 00:05:26.828 "vmd_rescan", 00:05:26.828 "vmd_remove_device", 00:05:26.828 "vmd_enable", 00:05:26.828 "sock_get_default_impl", 00:05:26.828 "sock_set_default_impl", 00:05:26.828 "sock_impl_set_options", 00:05:26.828 "sock_impl_get_options", 00:05:26.828 "iobuf_get_stats", 00:05:26.828 "iobuf_set_options", 00:05:26.829 "keyring_get_keys", 00:05:26.829 "framework_get_pci_devices", 00:05:26.829 "framework_get_config", 00:05:26.829 "framework_get_subsystems", 00:05:26.829 
"vfu_tgt_set_base_path", 00:05:26.829 "trace_get_info", 00:05:26.829 "trace_get_tpoint_group_mask", 00:05:26.829 "trace_disable_tpoint_group", 00:05:26.829 "trace_enable_tpoint_group", 00:05:26.829 "trace_clear_tpoint_mask", 00:05:26.829 "trace_set_tpoint_mask", 00:05:26.829 "spdk_get_version", 00:05:26.829 "rpc_get_methods" 00:05:26.829 ] 00:05:26.829 01:33:50 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:26.829 01:33:50 spdkcli_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:05:26.829 01:33:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:26.829 01:33:50 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:26.829 01:33:50 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3916031 00:05:26.829 01:33:50 spdkcli_tcp -- common/autotest_common.sh@947 -- # '[' -z 3916031 ']' 00:05:26.829 01:33:50 spdkcli_tcp -- common/autotest_common.sh@951 -- # kill -0 3916031 00:05:26.829 01:33:50 spdkcli_tcp -- common/autotest_common.sh@952 -- # uname 00:05:26.829 01:33:50 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:26.829 01:33:50 spdkcli_tcp -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3916031 00:05:26.829 01:33:50 spdkcli_tcp -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:26.829 01:33:50 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:26.829 01:33:50 spdkcli_tcp -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3916031' 00:05:26.829 killing process with pid 3916031 00:05:26.829 01:33:50 spdkcli_tcp -- common/autotest_common.sh@966 -- # kill 3916031 00:05:26.829 01:33:50 spdkcli_tcp -- common/autotest_common.sh@971 -- # wait 3916031 00:05:27.393 00:05:27.393 real 0m1.208s 00:05:27.393 user 0m2.135s 00:05:27.393 sys 0m0.442s 00:05:27.393 01:33:51 spdkcli_tcp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:27.393 01:33:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:27.393 ************************************ 00:05:27.393 END TEST spdkcli_tcp 00:05:27.393 ************************************ 00:05:27.393 01:33:51 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:27.393 01:33:51 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:27.393 01:33:51 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:27.393 01:33:51 -- common/autotest_common.sh@10 -- # set +x 00:05:27.393 ************************************ 00:05:27.393 START TEST dpdk_mem_utility 00:05:27.393 ************************************ 00:05:27.393 01:33:51 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:27.393 * Looking for test storage... 
00:05:27.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:27.393 01:33:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:27.393 01:33:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3916233 00:05:27.393 01:33:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:27.393 01:33:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3916233 00:05:27.393 01:33:51 dpdk_mem_utility -- common/autotest_common.sh@828 -- # '[' -z 3916233 ']' 00:05:27.393 01:33:51 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.393 01:33:51 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:27.393 01:33:51 dpdk_mem_utility -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.393 01:33:51 dpdk_mem_utility -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:27.393 01:33:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:27.393 [2024-05-15 01:33:51.304750] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:05:27.393 [2024-05-15 01:33:51.304842] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3916233 ] 00:05:27.650 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.650 [2024-05-15 01:33:51.372655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.650 [2024-05-15 01:33:51.453530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.908 01:33:51 dpdk_mem_utility -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:27.908 01:33:51 dpdk_mem_utility -- common/autotest_common.sh@861 -- # return 0 00:05:27.908 01:33:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:27.908 01:33:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:27.908 01:33:51 dpdk_mem_utility -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:27.908 01:33:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:27.908 { 00:05:27.908 "filename": "/tmp/spdk_mem_dump.txt" 00:05:27.908 } 00:05:27.908 01:33:51 dpdk_mem_utility -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:27.908 01:33:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:27.908 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:27.908 1 heaps totaling size 814.000000 MiB 00:05:27.908 size: 814.000000 MiB heap id: 0 00:05:27.908 end heaps---------- 00:05:27.908 8 mempools totaling size 598.116089 MiB 00:05:27.908 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:27.908 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:27.908 size: 84.521057 MiB name: bdev_io_3916233 00:05:27.908 size: 51.011292 MiB name: evtpool_3916233 00:05:27.908 size: 50.003479 MiB name: 
msgpool_3916233 00:05:27.908 size: 21.763794 MiB name: PDU_Pool 00:05:27.908 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:27.908 size: 0.026123 MiB name: Session_Pool 00:05:27.908 end mempools------- 00:05:27.908 6 memzones totaling size 4.142822 MiB 00:05:27.908 size: 1.000366 MiB name: RG_ring_0_3916233 00:05:27.908 size: 1.000366 MiB name: RG_ring_1_3916233 00:05:27.908 size: 1.000366 MiB name: RG_ring_4_3916233 00:05:27.908 size: 1.000366 MiB name: RG_ring_5_3916233 00:05:27.908 size: 0.125366 MiB name: RG_ring_2_3916233 00:05:27.908 size: 0.015991 MiB name: RG_ring_3_3916233 00:05:27.908 end memzones------- 00:05:27.908 01:33:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:27.908 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:27.908 list of free elements. size: 12.519348 MiB 00:05:27.908 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:27.908 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:27.908 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:27.908 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:27.908 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:27.908 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:27.908 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:27.908 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:27.908 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:27.908 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:27.908 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:27.908 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:27.908 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:27.908 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:27.908 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:27.908 list of standard malloc elements. 
size: 199.218079 MiB 00:05:27.908 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:27.908 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:27.908 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:27.908 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:27.908 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:27.908 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:27.908 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:27.908 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:27.908 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:27.908 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:27.908 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:27.908 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:27.908 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:27.909 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:27.909 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:27.909 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:27.909 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:27.909 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:27.909 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:27.909 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:27.909 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:27.909 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:27.909 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:27.909 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:27.909 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:27.909 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:27.909 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:27.909 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:27.909 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:27.909 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:27.909 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:27.909 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:27.909 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:27.909 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:27.909 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:27.909 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:27.909 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:27.909 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:27.909 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:27.909 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:27.909 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:27.909 list of memzone associated elements. 
size: 602.262573 MiB 00:05:27.909 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:27.909 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:27.909 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:27.909 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:27.909 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:27.909 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3916233_0 00:05:27.909 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:27.909 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3916233_0 00:05:27.909 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:27.909 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3916233_0 00:05:27.909 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:27.909 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:27.909 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:27.909 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:27.909 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:27.909 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3916233 00:05:27.909 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:27.909 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3916233 00:05:27.909 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:27.909 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3916233 00:05:27.909 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:27.909 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:27.909 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:27.909 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:27.909 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:27.909 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:27.909 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:27.909 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:27.909 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:27.909 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3916233 00:05:27.909 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:27.909 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3916233 00:05:27.909 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:27.909 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3916233 00:05:27.909 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:27.909 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3916233 00:05:27.909 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:27.909 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3916233 00:05:27.909 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:27.909 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:27.909 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:27.909 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:27.909 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:27.909 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:27.909 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:27.909 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_3916233 00:05:27.909 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:27.909 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:27.909 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:27.909 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:27.909 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:27.909 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3916233 00:05:27.909 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:27.909 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:27.909 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:27.909 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3916233 00:05:27.909 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:27.909 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3916233 00:05:27.909 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:27.909 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:27.909 01:33:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:27.909 01:33:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3916233 00:05:27.909 01:33:51 dpdk_mem_utility -- common/autotest_common.sh@947 -- # '[' -z 3916233 ']' 00:05:27.909 01:33:51 dpdk_mem_utility -- common/autotest_common.sh@951 -- # kill -0 3916233 00:05:27.909 01:33:51 dpdk_mem_utility -- common/autotest_common.sh@952 -- # uname 00:05:27.909 01:33:51 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:27.909 01:33:51 dpdk_mem_utility -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3916233 00:05:28.166 01:33:51 dpdk_mem_utility -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:28.166 01:33:51 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:28.166 01:33:51 dpdk_mem_utility -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3916233' 00:05:28.166 killing process with pid 3916233 00:05:28.166 01:33:51 dpdk_mem_utility -- common/autotest_common.sh@966 -- # kill 3916233 00:05:28.166 01:33:51 dpdk_mem_utility -- common/autotest_common.sh@971 -- # wait 3916233 00:05:28.422 00:05:28.422 real 0m1.036s 00:05:28.422 user 0m0.986s 00:05:28.422 sys 0m0.414s 00:05:28.422 01:33:52 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:28.422 01:33:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:28.422 ************************************ 00:05:28.422 END TEST dpdk_mem_utility 00:05:28.422 ************************************ 00:05:28.422 01:33:52 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:28.422 01:33:52 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:28.422 01:33:52 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:28.422 01:33:52 -- common/autotest_common.sh@10 -- # set +x 00:05:28.422 ************************************ 00:05:28.422 START TEST event 00:05:28.422 ************************************ 00:05:28.422 01:33:52 event -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:28.422 * Looking for test storage... 
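The heap, mempool and memzone report above is built in two steps: the env_dpdk_get_mem_stats RPC makes the target write its DPDK memory state to /tmp/spdk_mem_dump.txt (the filename returned in the rpc_cmd output), and scripts/dpdk_mem_info.py renders that dump. A rough sketch against a target already listening on /var/tmp/spdk.sock:

    # Ask the target to dump its memory state to /tmp/spdk_mem_dump.txt.
    scripts/rpc.py env_dpdk_get_mem_stats

    # Summarize heaps, mempools and memzones from the dump.
    scripts/dpdk_mem_info.py

    # Element-by-element view of heap 0, as in the listing above.
    scripts/dpdk_mem_info.py -m 0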
00:05:28.422 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:28.422 01:33:52 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:28.422 01:33:52 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:28.422 01:33:52 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:28.422 01:33:52 event -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:05:28.422 01:33:52 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:28.422 01:33:52 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.678 ************************************ 00:05:28.678 START TEST event_perf 00:05:28.678 ************************************ 00:05:28.678 01:33:52 event.event_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:28.678 Running I/O for 1 seconds...[2024-05-15 01:33:52.387716] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:05:28.678 [2024-05-15 01:33:52.387784] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3916424 ] 00:05:28.678 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.678 [2024-05-15 01:33:52.464139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:28.678 [2024-05-15 01:33:52.559242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.678 [2024-05-15 01:33:52.559289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.678 [2024-05-15 01:33:52.559362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:28.678 [2024-05-15 01:33:52.559365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.046 Running I/O for 1 seconds... 00:05:30.046 lcore 0: 230477 00:05:30.046 lcore 1: 230478 00:05:30.046 lcore 2: 230477 00:05:30.046 lcore 3: 230476 00:05:30.046 done. 00:05:30.046 00:05:30.046 real 0m1.269s 00:05:30.046 user 0m4.162s 00:05:30.046 sys 0m0.098s 00:05:30.046 01:33:53 event.event_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:30.046 01:33:53 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:30.046 ************************************ 00:05:30.046 END TEST event_perf 00:05:30.046 ************************************ 00:05:30.046 01:33:53 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:30.046 01:33:53 event -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:05:30.046 01:33:53 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:30.046 01:33:53 event -- common/autotest_common.sh@10 -- # set +x 00:05:30.046 ************************************ 00:05:30.046 START TEST event_reactor 00:05:30.046 ************************************ 00:05:30.046 01:33:53 event.event_reactor -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:30.046 [2024-05-15 01:33:53.710015] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
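event_perf prints one completion counter per lcore for a timed run, so the figures above come to roughly 230k events per core over the one second set by -t. A sketch that reruns the binary and totals the counters, assuming only the "lcore N: count" output format shown above:

    # Run on cores 0-3 for one second and sum the per-lcore counters.
    test/event/event_perf/event_perf -m 0xF -t 1 \
        | awk '/^lcore/ {sum += $3} END {print "total events: " sum}'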
00:05:30.046 [2024-05-15 01:33:53.710078] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3916584 ] 00:05:30.046 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.046 [2024-05-15 01:33:53.778998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.046 [2024-05-15 01:33:53.868158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.417 test_start 00:05:31.417 oneshot 00:05:31.417 tick 100 00:05:31.417 tick 100 00:05:31.417 tick 250 00:05:31.417 tick 100 00:05:31.417 tick 100 00:05:31.417 tick 100 00:05:31.417 tick 250 00:05:31.417 tick 500 00:05:31.417 tick 100 00:05:31.417 tick 100 00:05:31.417 tick 250 00:05:31.417 tick 100 00:05:31.417 tick 100 00:05:31.417 test_end 00:05:31.417 00:05:31.417 real 0m1.252s 00:05:31.417 user 0m1.155s 00:05:31.417 sys 0m0.092s 00:05:31.417 01:33:54 event.event_reactor -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:31.417 01:33:54 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:31.417 ************************************ 00:05:31.417 END TEST event_reactor 00:05:31.417 ************************************ 00:05:31.417 01:33:54 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:31.417 01:33:54 event -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:05:31.417 01:33:54 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:31.417 01:33:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:31.417 ************************************ 00:05:31.417 START TEST event_reactor_perf 00:05:31.417 ************************************ 00:05:31.417 01:33:54 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:31.417 [2024-05-15 01:33:55.009361] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
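The oneshot/tick lines are the reactor test's own trace of the events it schedules during the one-second run; -t is the only knob, so a longer trace needs nothing more than (a sketch, binary path as in this workspace):

    # Same event trace over five seconds instead of one.
    test/event/reactor/reactor -t 5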
00:05:31.417 [2024-05-15 01:33:55.009418] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3916738 ] 00:05:31.417 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.417 [2024-05-15 01:33:55.084198] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.417 [2024-05-15 01:33:55.173126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.349 test_start 00:05:32.349 test_end 00:05:32.349 Performance: 353282 events per second 00:05:32.349 00:05:32.349 real 0m1.257s 00:05:32.349 user 0m1.158s 00:05:32.349 sys 0m0.094s 00:05:32.349 01:33:56 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:32.349 01:33:56 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:32.349 ************************************ 00:05:32.349 END TEST event_reactor_perf 00:05:32.349 ************************************ 00:05:32.349 01:33:56 event -- event/event.sh@49 -- # uname -s 00:05:32.608 01:33:56 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:32.608 01:33:56 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:32.608 01:33:56 event -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:32.608 01:33:56 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:32.608 01:33:56 event -- common/autotest_common.sh@10 -- # set +x 00:05:32.608 ************************************ 00:05:32.608 START TEST event_scheduler 00:05:32.608 ************************************ 00:05:32.608 01:33:56 event.event_scheduler -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:32.608 * Looking for test storage... 00:05:32.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:32.608 01:33:56 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:32.608 01:33:56 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3916924 00:05:32.608 01:33:56 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:32.608 01:33:56 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:32.608 01:33:56 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3916924 00:05:32.608 01:33:56 event.event_scheduler -- common/autotest_common.sh@828 -- # '[' -z 3916924 ']' 00:05:32.608 01:33:56 event.event_scheduler -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.608 01:33:56 event.event_scheduler -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:32.608 01:33:56 event.event_scheduler -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
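waitforlisten blocks here until the freshly started scheduler app answers on its RPC socket; even under --wait-for-rpc the server already accepts rpc_get_methods. A rough equivalent of that wait, using only rpc.py and the socket path shown above (the real helper in autotest_common.sh also keeps checking that the pid is still alive):

    # Poll the RPC socket until the app responds, for up to ~100 seconds.
    for i in $(seq 1 100); do
        scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 1
    done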
00:05:32.608 01:33:56 event.event_scheduler -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:32.608 01:33:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:32.608 [2024-05-15 01:33:56.407819] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:05:32.608 [2024-05-15 01:33:56.407895] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3916924 ] 00:05:32.608 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.608 [2024-05-15 01:33:56.474363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:32.866 [2024-05-15 01:33:56.558693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.866 [2024-05-15 01:33:56.558753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.866 [2024-05-15 01:33:56.558819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:32.866 [2024-05-15 01:33:56.558822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:32.866 01:33:56 event.event_scheduler -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:32.866 01:33:56 event.event_scheduler -- common/autotest_common.sh@861 -- # return 0 00:05:32.866 01:33:56 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:32.866 01:33:56 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:32.866 01:33:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:32.866 POWER: Env isn't set yet! 00:05:32.866 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:32.866 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:05:32.866 POWER: Cannot get available frequencies of lcore 0 00:05:32.866 POWER: Attempting to initialise PSTAT power management... 00:05:32.866 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:32.866 POWER: Initialized successfully for lcore 0 power management 00:05:32.866 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:32.866 POWER: Initialized successfully for lcore 1 power management 00:05:32.866 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:32.866 POWER: Initialized successfully for lcore 2 power management 00:05:32.866 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:32.866 POWER: Initialized successfully for lcore 3 power management 00:05:32.866 01:33:56 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:32.866 01:33:56 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:32.866 01:33:56 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:32.866 01:33:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:32.866 [2024-05-15 01:33:56.757691] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
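Because scheduler was launched with --wait-for-rpc, the framework only finishes booting once the RPCs logged above arrive: the test picks the dynamic scheduler while the app is still pre-init, then releases initialization. Condensed, the same sequence is (socket path as in this run; framework_wait_init is optional and simply blocks until init completes):

    # Select the scheduler before subsystem init has run...
    scripts/rpc.py framework_set_scheduler dynamic

    # ...then let initialization proceed and wait for it to finish.
    scripts/rpc.py framework_start_init
    scripts/rpc.py framework_wait_init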
00:05:32.866 01:33:56 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:32.866 01:33:56 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:32.866 01:33:56 event.event_scheduler -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:32.866 01:33:56 event.event_scheduler -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:32.866 01:33:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:32.866 ************************************ 00:05:32.866 START TEST scheduler_create_thread 00:05:32.866 ************************************ 00:05:32.866 01:33:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # scheduler_create_thread 00:05:32.866 01:33:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:32.866 01:33:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:32.866 01:33:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.124 2 00:05:33.124 01:33:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:33.124 01:33:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:33.124 01:33:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:33.124 01:33:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.124 3 00:05:33.124 01:33:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:33.124 01:33:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:33.124 01:33:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:33.124 01:33:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.124 4 00:05:33.124 01:33:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:33.124 01:33:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:33.124 01:33:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:33.124 01:33:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.124 5 00:05:33.124 01:33:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:33.124 01:33:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:33.124 01:33:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:33.124 01:33:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.124 6 00:05:33.124 01:33:56 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:33.124 01:33:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:33.124 01:33:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:33.124 01:33:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.124 7 00:05:33.124 01:33:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:33.124 01:33:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:33.124 01:33:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:33.124 01:33:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.124 8 00:05:33.124 01:33:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:33.124 01:33:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:33.124 01:33:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:33.124 01:33:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.124 9 00:05:33.124 01:33:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:33.124 01:33:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:33.124 01:33:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:33.124 01:33:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.124 10 00:05:33.124 01:33:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:33.124 01:33:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:33.124 01:33:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:33.124 01:33:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.124 01:33:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:33.124 01:33:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:33.124 01:33:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:33.124 01:33:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:33.124 01:33:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.688 01:33:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:33.688 01:33:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:33.688 01:33:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:33.688 01:33:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.060 01:33:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:35.060 01:33:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:35.060 01:33:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:35.060 01:33:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:35.060 01:33:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.992 01:33:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:35.992 00:05:35.992 real 0m3.099s 00:05:35.992 user 0m0.010s 00:05:35.992 sys 0m0.004s 00:05:35.992 01:33:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:35.992 01:33:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.992 ************************************ 00:05:35.992 END TEST scheduler_create_thread 00:05:35.992 ************************************ 00:05:35.992 01:33:59 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:35.992 01:33:59 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3916924 00:05:35.992 01:33:59 event.event_scheduler -- common/autotest_common.sh@947 -- # '[' -z 3916924 ']' 00:05:35.992 01:33:59 event.event_scheduler -- common/autotest_common.sh@951 -- # kill -0 3916924 00:05:35.992 01:33:59 event.event_scheduler -- common/autotest_common.sh@952 -- # uname 00:05:35.992 01:33:59 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:35.992 01:33:59 event.event_scheduler -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3916924 00:05:36.250 01:33:59 event.event_scheduler -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:05:36.250 01:33:59 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:05:36.250 01:33:59 event.event_scheduler -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3916924' 00:05:36.250 killing process with pid 3916924 00:05:36.250 01:33:59 event.event_scheduler -- common/autotest_common.sh@966 -- # kill 3916924 00:05:36.250 01:33:59 event.event_scheduler -- common/autotest_common.sh@971 -- # wait 3916924 00:05:36.508 [2024-05-15 01:34:00.273841] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
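All of the thread churn in scheduler_create_thread goes through rpc.py's plugin hook; the scheduler_thread_* methods are not part of the core RPC set but come from the scheduler_plugin module shipped with the test, which has to be importable by rpc.py (e.g. via PYTHONPATH). A minimal sketch of the calls made above, with the thread ids (11, 12) taken from this run:

    # Create a thread pinned to core 0 (mask 0x1) reporting 100% busy time.
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100

    # Drop thread 11 to 50% activity, then delete thread 12 outright.
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12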
00:05:36.508 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:05:36.508 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:36.508 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:05:36.508 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:36.508 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:05:36.508 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:36.508 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:05:36.508 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:36.766 00:05:36.766 real 0m4.203s 00:05:36.766 user 0m6.839s 00:05:36.766 sys 0m0.349s 00:05:36.766 01:34:00 event.event_scheduler -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:36.766 01:34:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:36.766 ************************************ 00:05:36.766 END TEST event_scheduler 00:05:36.767 ************************************ 00:05:36.767 01:34:00 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:36.767 01:34:00 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:36.767 01:34:00 event -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:36.767 01:34:00 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:36.767 01:34:00 event -- common/autotest_common.sh@10 -- # set +x 00:05:36.767 ************************************ 00:05:36.767 START TEST app_repeat 00:05:36.767 ************************************ 00:05:36.767 01:34:00 event.app_repeat -- common/autotest_common.sh@1122 -- # app_repeat_test 00:05:36.767 01:34:00 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.767 01:34:00 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.767 01:34:00 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:36.767 01:34:00 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.767 01:34:00 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:36.767 01:34:00 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:36.767 01:34:00 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:36.767 01:34:00 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3917502 00:05:36.767 01:34:00 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:36.767 01:34:00 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:36.767 01:34:00 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3917502' 00:05:36.767 Process app_repeat pid: 3917502 00:05:36.767 01:34:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:36.767 01:34:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:36.767 spdk_app_start Round 0 00:05:36.767 01:34:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3917502 /var/tmp/spdk-nbd.sock 00:05:36.767 01:34:00 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 3917502 ']' 00:05:36.767 01:34:00 event.app_repeat -- 
common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:36.767 01:34:00 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:36.767 01:34:00 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:36.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:36.767 01:34:00 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:36.767 01:34:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:36.767 [2024-05-15 01:34:00.604512] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:05:36.767 [2024-05-15 01:34:00.604584] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3917502 ] 00:05:36.767 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.767 [2024-05-15 01:34:00.677573] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.025 [2024-05-15 01:34:00.763369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.025 [2024-05-15 01:34:00.763374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.025 01:34:00 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:37.026 01:34:00 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:05:37.026 01:34:00 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:37.284 Malloc0 00:05:37.284 01:34:01 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:37.542 Malloc1 00:05:37.542 01:34:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:37.542 01:34:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.542 01:34:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.542 01:34:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:37.542 01:34:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.542 01:34:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:37.542 01:34:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:37.542 01:34:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.542 01:34:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.542 01:34:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:37.542 01:34:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.542 01:34:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:37.542 01:34:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:37.542 01:34:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:37.542 01:34:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.542 01:34:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:37.800 /dev/nbd0 00:05:37.800 01:34:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:37.800 01:34:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:37.800 01:34:01 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd0 00:05:37.800 01:34:01 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:05:37.800 01:34:01 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:05:37.800 01:34:01 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:05:37.800 01:34:01 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd0 /proc/partitions 00:05:37.800 01:34:01 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:05:37.800 01:34:01 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:05:37.800 01:34:01 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:05:37.800 01:34:01 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:37.800 1+0 records in 00:05:37.800 1+0 records out 00:05:37.800 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000181559 s, 22.6 MB/s 00:05:37.800 01:34:01 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:37.800 01:34:01 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:05:37.800 01:34:01 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:37.800 01:34:01 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:05:37.800 01:34:01 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:05:37.800 01:34:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:37.800 01:34:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.800 01:34:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:38.057 /dev/nbd1 00:05:38.057 01:34:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:38.057 01:34:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:38.057 01:34:01 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd1 00:05:38.057 01:34:01 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:05:38.057 01:34:01 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:05:38.057 01:34:01 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:05:38.057 01:34:01 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd1 /proc/partitions 00:05:38.057 01:34:01 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:05:38.057 01:34:01 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:05:38.057 01:34:01 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:05:38.057 01:34:01 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.057 1+0 records in 00:05:38.057 1+0 records out 00:05:38.057 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000202587 s, 20.2 MB/s 00:05:38.058 01:34:01 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:38.058 01:34:01 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:05:38.058 01:34:01 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:38.058 01:34:01 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:05:38.058 01:34:01 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:05:38.058 01:34:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.058 01:34:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.058 01:34:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:38.058 01:34:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.058 01:34:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:38.315 01:34:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:38.315 { 00:05:38.315 "nbd_device": "/dev/nbd0", 00:05:38.315 "bdev_name": "Malloc0" 00:05:38.315 }, 00:05:38.315 { 00:05:38.315 "nbd_device": "/dev/nbd1", 00:05:38.315 "bdev_name": "Malloc1" 00:05:38.315 } 00:05:38.315 ]' 00:05:38.315 01:34:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:38.315 { 00:05:38.315 "nbd_device": "/dev/nbd0", 00:05:38.315 "bdev_name": "Malloc0" 00:05:38.315 }, 00:05:38.315 { 00:05:38.315 "nbd_device": "/dev/nbd1", 00:05:38.315 "bdev_name": "Malloc1" 00:05:38.315 } 00:05:38.315 ]' 00:05:38.315 01:34:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:38.315 01:34:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:38.315 /dev/nbd1' 00:05:38.315 01:34:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:38.315 /dev/nbd1' 00:05:38.315 01:34:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:38.315 01:34:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:38.316 01:34:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:38.316 01:34:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:38.316 01:34:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:38.316 01:34:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:38.316 01:34:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.316 01:34:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:38.316 01:34:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:38.316 01:34:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:38.316 01:34:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:38.316 01:34:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:38.316 256+0 records in 00:05:38.316 256+0 records out 00:05:38.316 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00494114 s, 212 MB/s 00:05:38.316 01:34:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in 
"${nbd_list[@]}" 00:05:38.316 01:34:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:38.574 256+0 records in 00:05:38.574 256+0 records out 00:05:38.574 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0238774 s, 43.9 MB/s 00:05:38.574 01:34:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:38.574 01:34:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:38.574 256+0 records in 00:05:38.574 256+0 records out 00:05:38.574 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0224755 s, 46.7 MB/s 00:05:38.574 01:34:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:38.574 01:34:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.574 01:34:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:38.574 01:34:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:38.574 01:34:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:38.574 01:34:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:38.574 01:34:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:38.574 01:34:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:38.574 01:34:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:38.574 01:34:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:38.574 01:34:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:38.574 01:34:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:38.574 01:34:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:38.574 01:34:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.574 01:34:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.574 01:34:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:38.574 01:34:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:38.574 01:34:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.574 01:34:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:38.832 01:34:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:38.832 01:34:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:38.832 01:34:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:38.832 01:34:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:38.832 01:34:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:38.832 01:34:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:38.832 01:34:02 event.app_repeat -- bdev/nbd_common.sh@41 
-- # break 00:05:38.832 01:34:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:38.832 01:34:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.832 01:34:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:39.090 01:34:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:39.090 01:34:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:39.090 01:34:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:39.090 01:34:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.090 01:34:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.090 01:34:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:39.090 01:34:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:39.090 01:34:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.090 01:34:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.090 01:34:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.090 01:34:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.348 01:34:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:39.348 01:34:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:39.348 01:34:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:39.348 01:34:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:39.348 01:34:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:39.348 01:34:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.348 01:34:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:39.348 01:34:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:39.348 01:34:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:39.348 01:34:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:39.348 01:34:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:39.348 01:34:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:39.348 01:34:03 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:39.606 01:34:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:39.863 [2024-05-15 01:34:03.646119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:39.863 [2024-05-15 01:34:03.730559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.863 [2024-05-15 01:34:03.730559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.863 [2024-05-15 01:34:03.785926] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:39.863 [2024-05-15 01:34:03.786001] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
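The round that just completed is nbd_common.sh's nbd_dd_data_verify in its two modes: in write mode it fills a 1 MiB scratch file from /dev/urandom and dd's it onto each exported NBD device with oflag=direct; in verify mode it cmp's the first 1M of each device back against the same file, then deletes it. A minimal standalone sketch of that pattern, using an illustrative scratch path rather than the test's nbdrandtest fixture:

  #!/usr/bin/env bash
  # write-then-verify a random pattern across a set of NBD devices (sketch)
  nbd_list=(/dev/nbd0 /dev/nbd1)
  tmp_file=/tmp/nbdrandtest.$$            # illustrative path, not the test fixture
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
  for dev in "${nbd_list[@]}"; do
      dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
  done
  for dev in "${nbd_list[@]}"; do
      cmp -b -n 1M "$tmp_file" "$dev"     # -b prints differing bytes; non-zero exit on mismatch
  done
  rm "$tmp_file"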
00:05:43.147 01:34:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:43.147 01:34:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:43.147 spdk_app_start Round 1 00:05:43.147 01:34:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3917502 /var/tmp/spdk-nbd.sock 00:05:43.147 01:34:06 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 3917502 ']' 00:05:43.147 01:34:06 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:43.147 01:34:06 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:43.147 01:34:06 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:43.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:43.147 01:34:06 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:43.147 01:34:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:43.147 01:34:06 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:43.147 01:34:06 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:05:43.147 01:34:06 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.147 Malloc0 00:05:43.147 01:34:06 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.405 Malloc1 00:05:43.405 01:34:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.405 01:34:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.405 01:34:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.405 01:34:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:43.405 01:34:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.405 01:34:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:43.405 01:34:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.405 01:34:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.405 01:34:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.405 01:34:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:43.405 01:34:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.405 01:34:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:43.405 01:34:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:43.405 01:34:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:43.405 01:34:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.405 01:34:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:43.664 /dev/nbd0 00:05:43.664 01:34:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:43.664 01:34:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:05:43.664 01:34:07 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd0 00:05:43.664 01:34:07 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:05:43.664 01:34:07 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:05:43.664 01:34:07 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:05:43.664 01:34:07 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd0 /proc/partitions 00:05:43.664 01:34:07 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:05:43.664 01:34:07 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:05:43.664 01:34:07 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:05:43.664 01:34:07 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:43.664 1+0 records in 00:05:43.664 1+0 records out 00:05:43.664 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000194472 s, 21.1 MB/s 00:05:43.664 01:34:07 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.664 01:34:07 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:05:43.664 01:34:07 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.664 01:34:07 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:05:43.664 01:34:07 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:05:43.664 01:34:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:43.664 01:34:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.664 01:34:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:43.950 /dev/nbd1 00:05:43.950 01:34:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:43.950 01:34:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:43.950 01:34:07 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd1 00:05:43.950 01:34:07 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:05:43.950 01:34:07 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:05:43.950 01:34:07 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:05:43.950 01:34:07 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd1 /proc/partitions 00:05:43.950 01:34:07 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:05:43.950 01:34:07 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:05:43.950 01:34:07 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:05:43.950 01:34:07 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:43.950 1+0 records in 00:05:43.950 1+0 records out 00:05:43.950 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199883 s, 20.5 MB/s 00:05:43.950 01:34:07 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.950 01:34:07 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:05:43.950 01:34:07 event.app_repeat -- 
common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.951 01:34:07 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:05:43.951 01:34:07 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:05:43.951 01:34:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:43.951 01:34:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.951 01:34:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:43.951 01:34:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.951 01:34:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:44.217 01:34:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:44.217 { 00:05:44.217 "nbd_device": "/dev/nbd0", 00:05:44.217 "bdev_name": "Malloc0" 00:05:44.217 }, 00:05:44.217 { 00:05:44.217 "nbd_device": "/dev/nbd1", 00:05:44.217 "bdev_name": "Malloc1" 00:05:44.217 } 00:05:44.217 ]' 00:05:44.217 01:34:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:44.217 { 00:05:44.217 "nbd_device": "/dev/nbd0", 00:05:44.217 "bdev_name": "Malloc0" 00:05:44.217 }, 00:05:44.217 { 00:05:44.217 "nbd_device": "/dev/nbd1", 00:05:44.217 "bdev_name": "Malloc1" 00:05:44.217 } 00:05:44.217 ]' 00:05:44.217 01:34:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:44.217 01:34:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:44.217 /dev/nbd1' 00:05:44.217 01:34:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:44.217 /dev/nbd1' 00:05:44.217 01:34:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:44.217 01:34:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:44.217 01:34:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:44.217 01:34:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:44.217 01:34:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:44.217 01:34:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:44.217 01:34:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.217 01:34:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:44.217 01:34:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:44.217 01:34:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:44.217 01:34:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:44.217 01:34:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:44.217 256+0 records in 00:05:44.217 256+0 records out 00:05:44.217 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00524704 s, 200 MB/s 00:05:44.217 01:34:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:44.217 01:34:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:44.217 256+0 records in 00:05:44.217 256+0 records out 00:05:44.217 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0208078 s, 50.4 MB/s 00:05:44.217 01:34:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:44.217 01:34:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:44.217 256+0 records in 00:05:44.217 256+0 records out 00:05:44.217 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0258649 s, 40.5 MB/s 00:05:44.217 01:34:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:44.217 01:34:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.217 01:34:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:44.217 01:34:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:44.217 01:34:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:44.217 01:34:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:44.217 01:34:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:44.217 01:34:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:44.217 01:34:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:44.217 01:34:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:44.217 01:34:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:44.217 01:34:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:44.217 01:34:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:44.217 01:34:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.217 01:34:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.217 01:34:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:44.217 01:34:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:44.217 01:34:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:44.217 01:34:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:44.475 01:34:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:44.475 01:34:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:44.475 01:34:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:44.475 01:34:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:44.475 01:34:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:44.475 01:34:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:44.475 01:34:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:44.475 01:34:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:44.475 01:34:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:44.475 01:34:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:44.732 01:34:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:44.732 01:34:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:44.732 01:34:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:44.732 01:34:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:44.732 01:34:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:44.732 01:34:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:44.732 01:34:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:44.732 01:34:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:44.732 01:34:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:44.733 01:34:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.733 01:34:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:44.990 01:34:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:44.990 01:34:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:44.990 01:34:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:44.990 01:34:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:44.990 01:34:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:44.990 01:34:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:44.990 01:34:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:44.990 01:34:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:44.990 01:34:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:44.990 01:34:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:44.990 01:34:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:44.990 01:34:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:44.990 01:34:08 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:45.247 01:34:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:45.505 [2024-05-15 01:34:09.328041] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:45.505 [2024-05-15 01:34:09.412134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.505 [2024-05-15 01:34:09.412139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.762 [2024-05-15 01:34:09.471692] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:45.762 [2024-05-15 01:34:09.471777] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
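The waitfornbd and waitfornbd_exit helpers polled throughout the round above both scan /proc/partitions up to 20 times: one waits for the device node to appear after nbd_start_disk (and then read-tests it with a single 4 KiB O_DIRECT dd), the other waits for it to vanish after nbd_stop_disk. A sketch with the loop bounds from the trace; the probe interval and the temp-file path are assumptions, since the helper's delays are not visible in the output:

  # poll /proc/partitions for an NBD node (waitfornbd-style sketch)
  waitfornbd() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1                       # assumed probe interval
      done
      ((i <= 20)) || return 1             # node never appeared
      # the trace then read-tests the fresh node: one 4 KiB direct read must land
      dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
      local size
      size=$(stat -c %s /tmp/nbdtest)
      rm -f /tmp/nbdtest                  # /tmp path is illustrative
      [ "$size" != 0 ]
  }
  waitfornbd_exit() {                     # inverse check used after nbd_stop_disk
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions || break
          sleep 0.1
      done
      ((i <= 20))
  }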
00:05:48.286 01:34:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:48.286 01:34:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:48.286 spdk_app_start Round 2 00:05:48.286 01:34:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3917502 /var/tmp/spdk-nbd.sock 00:05:48.286 01:34:12 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 3917502 ']' 00:05:48.286 01:34:12 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:48.286 01:34:12 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:48.286 01:34:12 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:48.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:48.286 01:34:12 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:48.286 01:34:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:48.544 01:34:12 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:48.544 01:34:12 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:05:48.544 01:34:12 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:48.801 Malloc0 00:05:48.801 01:34:12 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:49.059 Malloc1 00:05:49.059 01:34:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:49.059 01:34:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.059 01:34:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.059 01:34:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:49.059 01:34:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.059 01:34:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:49.059 01:34:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:49.059 01:34:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.059 01:34:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.059 01:34:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:49.059 01:34:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.059 01:34:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:49.059 01:34:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:49.059 01:34:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:49.059 01:34:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.059 01:34:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:49.317 /dev/nbd0 00:05:49.317 01:34:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:49.317 01:34:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:05:49.317 01:34:13 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd0 00:05:49.317 01:34:13 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:05:49.317 01:34:13 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:05:49.317 01:34:13 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:05:49.317 01:34:13 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd0 /proc/partitions 00:05:49.317 01:34:13 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:05:49.317 01:34:13 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:05:49.317 01:34:13 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:05:49.317 01:34:13 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:49.317 1+0 records in 00:05:49.317 1+0 records out 00:05:49.317 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206486 s, 19.8 MB/s 00:05:49.317 01:34:13 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:49.317 01:34:13 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:05:49.317 01:34:13 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:49.317 01:34:13 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:05:49.317 01:34:13 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:05:49.317 01:34:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.317 01:34:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.317 01:34:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:49.575 /dev/nbd1 00:05:49.575 01:34:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:49.575 01:34:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:49.575 01:34:13 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd1 00:05:49.575 01:34:13 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:05:49.575 01:34:13 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:05:49.575 01:34:13 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:05:49.575 01:34:13 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd1 /proc/partitions 00:05:49.575 01:34:13 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:05:49.575 01:34:13 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:05:49.575 01:34:13 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:05:49.575 01:34:13 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:49.575 1+0 records in 00:05:49.575 1+0 records out 00:05:49.575 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197678 s, 20.7 MB/s 00:05:49.575 01:34:13 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:49.575 01:34:13 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:05:49.575 01:34:13 event.app_repeat -- 
common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:49.575 01:34:13 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:05:49.575 01:34:13 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:05:49.575 01:34:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.575 01:34:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.575 01:34:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.575 01:34:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.575 01:34:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:49.833 01:34:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:49.833 { 00:05:49.833 "nbd_device": "/dev/nbd0", 00:05:49.833 "bdev_name": "Malloc0" 00:05:49.833 }, 00:05:49.833 { 00:05:49.833 "nbd_device": "/dev/nbd1", 00:05:49.833 "bdev_name": "Malloc1" 00:05:49.833 } 00:05:49.833 ]' 00:05:49.833 01:34:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:49.833 { 00:05:49.833 "nbd_device": "/dev/nbd0", 00:05:49.833 "bdev_name": "Malloc0" 00:05:49.833 }, 00:05:49.833 { 00:05:49.833 "nbd_device": "/dev/nbd1", 00:05:49.833 "bdev_name": "Malloc1" 00:05:49.833 } 00:05:49.833 ]' 00:05:49.833 01:34:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:49.833 01:34:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:49.833 /dev/nbd1' 00:05:49.833 01:34:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:49.833 /dev/nbd1' 00:05:49.833 01:34:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:49.833 01:34:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:49.833 01:34:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:49.833 01:34:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:49.833 01:34:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:49.833 01:34:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:49.833 01:34:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.833 01:34:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:49.833 01:34:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:49.833 01:34:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:49.833 01:34:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:49.833 01:34:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:49.833 256+0 records in 00:05:49.833 256+0 records out 00:05:49.833 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00416295 s, 252 MB/s 00:05:49.833 01:34:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:49.833 01:34:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:49.833 256+0 records in 00:05:49.833 256+0 records out 00:05:49.833 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0221402 s, 47.4 MB/s 00:05:49.833 01:34:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:49.833 01:34:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:50.091 256+0 records in 00:05:50.091 256+0 records out 00:05:50.091 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0254224 s, 41.2 MB/s 00:05:50.091 01:34:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:50.091 01:34:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.091 01:34:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:50.091 01:34:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:50.091 01:34:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:50.091 01:34:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:50.091 01:34:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:50.091 01:34:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.091 01:34:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:50.091 01:34:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.091 01:34:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:50.091 01:34:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:50.091 01:34:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:50.091 01:34:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.091 01:34:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.091 01:34:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:50.091 01:34:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:50.091 01:34:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.091 01:34:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:50.348 01:34:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:50.348 01:34:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:50.348 01:34:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:50.348 01:34:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.348 01:34:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.348 01:34:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:50.348 01:34:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:50.348 01:34:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.348 01:34:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.348 01:34:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:50.348 01:34:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:50.348 01:34:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:50.348 01:34:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:50.348 01:34:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.348 01:34:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.348 01:34:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:50.349 01:34:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:50.349 01:34:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.606 01:34:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:50.606 01:34:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.606 01:34:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.606 01:34:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:50.606 01:34:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:50.606 01:34:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:50.863 01:34:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:50.863 01:34:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:50.863 01:34:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.863 01:34:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:50.863 01:34:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:50.863 01:34:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:50.863 01:34:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:50.863 01:34:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:50.863 01:34:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:50.863 01:34:14 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:51.121 01:34:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:51.378 [2024-05-15 01:34:15.053660] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:51.378 [2024-05-15 01:34:15.140984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.378 [2024-05-15 01:34:15.140988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.378 [2024-05-15 01:34:15.203787] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:51.378 [2024-05-15 01:34:15.203874] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
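nbd_get_count, run after each start and stop phase above, asks the target for its disk table over JSON-RPC and counts /dev/nbd entries: jq pulls .nbd_device out of each element, grep -c tallies the matches, and the bare `true` visible in the trace absorbs grep's non-zero exit when the list is empty. Condensed into a sketch using the rpc.py path and socket from this run:

  # count exported NBD devices from the nbd_get_disks JSON reply (sketch)
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  nbd_disks_json=$("$rpc_py" -s "$sock" nbd_get_disks)
  nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
  count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)   # 0 matches => grep exits 1
  echo "active nbd devices: $count"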
00:05:53.904 01:34:17 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3917502 /var/tmp/spdk-nbd.sock 00:05:53.904 01:34:17 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 3917502 ']' 00:05:53.904 01:34:17 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:53.904 01:34:17 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:53.904 01:34:17 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:53.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:53.904 01:34:17 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:53.904 01:34:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:54.162 01:34:18 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:54.162 01:34:18 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:05:54.162 01:34:18 event.app_repeat -- event/event.sh@39 -- # killprocess 3917502 00:05:54.162 01:34:18 event.app_repeat -- common/autotest_common.sh@947 -- # '[' -z 3917502 ']' 00:05:54.162 01:34:18 event.app_repeat -- common/autotest_common.sh@951 -- # kill -0 3917502 00:05:54.162 01:34:18 event.app_repeat -- common/autotest_common.sh@952 -- # uname 00:05:54.162 01:34:18 event.app_repeat -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:54.162 01:34:18 event.app_repeat -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3917502 00:05:54.419 01:34:18 event.app_repeat -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:54.419 01:34:18 event.app_repeat -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:54.419 01:34:18 event.app_repeat -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3917502' 00:05:54.419 killing process with pid 3917502 00:05:54.419 01:34:18 event.app_repeat -- common/autotest_common.sh@966 -- # kill 3917502 00:05:54.419 01:34:18 event.app_repeat -- common/autotest_common.sh@971 -- # wait 3917502 00:05:54.419 spdk_app_start is called in Round 0. 00:05:54.419 Shutdown signal received, stop current app iteration 00:05:54.419 Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 reinitialization... 00:05:54.419 spdk_app_start is called in Round 1. 00:05:54.419 Shutdown signal received, stop current app iteration 00:05:54.419 Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 reinitialization... 00:05:54.419 spdk_app_start is called in Round 2. 00:05:54.419 Shutdown signal received, stop current app iteration 00:05:54.419 Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 reinitialization... 00:05:54.419 spdk_app_start is called in Round 3. 
00:05:54.419 Shutdown signal received, stop current app iteration 00:05:54.419 01:34:18 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:54.419 01:34:18 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:54.419 00:05:54.419 real 0m17.715s 00:05:54.419 user 0m38.896s 00:05:54.419 sys 0m3.376s 00:05:54.419 01:34:18 event.app_repeat -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:54.419 01:34:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:54.419 ************************************ 00:05:54.419 END TEST app_repeat 00:05:54.419 ************************************ 00:05:54.419 01:34:18 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:54.419 01:34:18 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:54.419 01:34:18 event -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:54.419 01:34:18 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:54.419 01:34:18 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.419 ************************************ 00:05:54.419 START TEST cpu_locks 00:05:54.419 ************************************ 00:05:54.419 01:34:18 event.cpu_locks -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:54.677 * Looking for test storage... 00:05:54.677 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:54.677 01:34:18 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:54.677 01:34:18 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:54.677 01:34:18 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:54.677 01:34:18 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:54.677 01:34:18 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:54.677 01:34:18 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:54.677 01:34:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.677 ************************************ 00:05:54.677 START TEST default_locks 00:05:54.677 ************************************ 00:05:54.677 01:34:18 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # default_locks 00:05:54.677 01:34:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3919853 00:05:54.677 01:34:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:54.677 01:34:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3919853 00:05:54.677 01:34:18 event.cpu_locks.default_locks -- common/autotest_common.sh@828 -- # '[' -z 3919853 ']' 00:05:54.677 01:34:18 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.677 01:34:18 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:54.677 01:34:18 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
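Each cpu_locks sub-test launches a fresh spdk_tgt and blocks in waitforlisten until the target's RPC socket answers; the helper switches xtrace off after printing its banner, so only its setup (rpc_addr, max_retries=100) shows above. A plausible minimal equivalent of the hidden loop, assuming $rpc_py points at scripts/rpc.py; the probe method and interval are assumptions, not SPDK's exact code:

  # block until an SPDK target answers RPCs on its UNIX socket (sketch)
  waitforlisten_sketch() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
      local max_retries=100
      while ((max_retries-- > 0)); do
          kill -0 "$pid" 2>/dev/null || return 1             # target died before listening
          "$rpc_py" -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
          sleep 0.1                                          # assumed retry interval
      done
      return 1
  }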
00:05:54.677 01:34:18 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:54.677 01:34:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.677 [2024-05-15 01:34:18.463131] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:05:54.677 [2024-05-15 01:34:18.463232] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3919853 ] 00:05:54.677 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.677 [2024-05-15 01:34:18.532904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.935 [2024-05-15 01:34:18.621306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.193 01:34:18 event.cpu_locks.default_locks -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:55.193 01:34:18 event.cpu_locks.default_locks -- common/autotest_common.sh@861 -- # return 0 00:05:55.193 01:34:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3919853 00:05:55.193 01:34:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3919853 00:05:55.193 01:34:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:55.450 lslocks: write error 00:05:55.451 01:34:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3919853 00:05:55.451 01:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@947 -- # '[' -z 3919853 ']' 00:05:55.451 01:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # kill -0 3919853 00:05:55.451 01:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # uname 00:05:55.451 01:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:55.451 01:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3919853 00:05:55.451 01:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:55.451 01:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:55.451 01:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3919853' 00:05:55.451 killing process with pid 3919853 00:05:55.451 01:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # kill 3919853 00:05:55.451 01:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # wait 3919853 00:05:56.016 01:34:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3919853 00:05:56.017 01:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # local es=0 00:05:56.017 01:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 3919853 00:05:56.017 01:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:05:56.017 01:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:56.017 01:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:05:56.017 01:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:56.017 01:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- 
# waitforlisten 3919853 00:05:56.017 01:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@828 -- # '[' -z 3919853 ']' 00:05:56.017 01:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.017 01:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:56.017 01:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.017 01:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:56.017 01:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.017 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 843: kill: (3919853) - No such process 00:05:56.017 ERROR: process (pid: 3919853) is no longer running 00:05:56.017 01:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:56.017 01:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@861 -- # return 1 00:05:56.017 01:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # es=1 00:05:56.017 01:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:56.017 01:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:56.017 01:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:56.017 01:34:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:56.017 01:34:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:56.017 01:34:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:56.017 01:34:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:56.017 00:05:56.017 real 0m1.351s 00:05:56.017 user 0m1.299s 00:05:56.017 sys 0m0.591s 00:05:56.017 01:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:56.017 01:34:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.017 ************************************ 00:05:56.017 END TEST default_locks 00:05:56.017 ************************************ 00:05:56.017 01:34:19 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:56.017 01:34:19 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:56.017 01:34:19 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:56.017 01:34:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.017 ************************************ 00:05:56.017 START TEST default_locks_via_rpc 00:05:56.017 ************************************ 00:05:56.017 01:34:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # default_locks_via_rpc 00:05:56.017 01:34:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3920018 00:05:56.017 01:34:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.017 01:34:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3920018 00:05:56.017 01:34:19 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 3920018 ']' 00:05:56.017 01:34:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.017 01:34:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:56.017 01:34:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.017 01:34:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:56.017 01:34:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.017 [2024-05-15 01:34:19.875771] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:05:56.017 [2024-05-15 01:34:19.875848] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3920018 ] 00:05:56.017 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.017 [2024-05-15 01:34:19.947016] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.275 [2024-05-15 01:34:20.034322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.534 01:34:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:56.534 01:34:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:05:56.534 01:34:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:56.534 01:34:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:56.534 01:34:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.534 01:34:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:56.534 01:34:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:56.534 01:34:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:56.534 01:34:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:56.534 01:34:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:56.534 01:34:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:56.534 01:34:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:56.534 01:34:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.534 01:34:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:56.534 01:34:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3920018 00:05:56.534 01:34:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3920018 00:05:56.534 01:34:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:56.792 01:34:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3920018 00:05:56.792 01:34:20 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@947 -- # '[' -z 3920018 ']' 00:05:56.792 01:34:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # kill -0 3920018 00:05:56.792 01:34:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # uname 00:05:56.792 01:34:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:56.792 01:34:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3920018 00:05:56.792 01:34:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:56.792 01:34:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:56.792 01:34:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3920018' 00:05:56.792 killing process with pid 3920018 00:05:56.792 01:34:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # kill 3920018 00:05:56.792 01:34:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # wait 3920018 00:05:57.358 00:05:57.358 real 0m1.193s 00:05:57.358 user 0m1.107s 00:05:57.358 sys 0m0.561s 00:05:57.358 01:34:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:57.358 01:34:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.358 ************************************ 00:05:57.358 END TEST default_locks_via_rpc 00:05:57.358 ************************************ 00:05:57.358 01:34:21 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:57.358 01:34:21 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:57.358 01:34:21 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:57.358 01:34:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.358 ************************************ 00:05:57.358 START TEST non_locking_app_on_locked_coremask 00:05:57.358 ************************************ 00:05:57.358 01:34:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # non_locking_app_on_locked_coremask 00:05:57.358 01:34:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3920192 00:05:57.358 01:34:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:57.358 01:34:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3920192 /var/tmp/spdk.sock 00:05:57.358 01:34:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 3920192 ']' 00:05:57.358 01:34:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.358 01:34:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:57.358 01:34:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:57.358 01:34:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:57.358 01:34:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.358 [2024-05-15 01:34:21.129103] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:05:57.358 [2024-05-15 01:34:21.129184] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3920192 ] 00:05:57.358 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.358 [2024-05-15 01:34:21.199688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.358 [2024-05-15 01:34:21.283319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.615 01:34:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:57.615 01:34:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 0 00:05:57.615 01:34:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3920216 00:05:57.615 01:34:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:57.616 01:34:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3920216 /var/tmp/spdk2.sock 00:05:57.616 01:34:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 3920216 ']' 00:05:57.616 01:34:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:57.616 01:34:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:57.616 01:34:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:57.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:57.616 01:34:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:57.616 01:34:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.874 [2024-05-15 01:34:21.595568] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:05:57.874 [2024-05-15 01:34:21.595644] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3920216 ] 00:05:57.874 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.874 [2024-05-15 01:34:21.711805] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
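Note on the test above: non_locking_app_on_locked_coremask starts one spdk_tgt that claims the core-0 lock and then a second one on the same core with --disable-cpumask-locks, which is why the second instance logs "CPU core locks deactivated." and comes up anyway. A minimal sketch of that pairing, with the flags and socket names taken from the trace and the binary path shortened relative to the SPDK tree:

  # first instance claims core 0 via the default cpumask locks
  ./build/bin/spdk_tgt -m 0x1 &
  # second instance shares core 0 only because it opts out of locking
  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &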
00:05:57.874 [2024-05-15 01:34:21.711854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.132 [2024-05-15 01:34:21.892419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.697 01:34:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:58.697 01:34:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 0 00:05:58.697 01:34:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3920192 00:05:58.697 01:34:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3920192 00:05:58.697 01:34:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:59.262 lslocks: write error 00:05:59.262 01:34:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3920192 00:05:59.262 01:34:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' -z 3920192 ']' 00:05:59.262 01:34:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # kill -0 3920192 00:05:59.262 01:34:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # uname 00:05:59.262 01:34:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:59.262 01:34:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3920192 00:05:59.262 01:34:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:59.262 01:34:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:59.262 01:34:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3920192' 00:05:59.262 killing process with pid 3920192 00:05:59.262 01:34:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # kill 3920192 00:05:59.262 01:34:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # wait 3920192 00:06:00.194 01:34:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3920216 00:06:00.194 01:34:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' -z 3920216 ']' 00:06:00.194 01:34:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # kill -0 3920216 00:06:00.194 01:34:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # uname 00:06:00.194 01:34:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:00.194 01:34:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3920216 00:06:00.194 01:34:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:00.194 01:34:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:00.194 01:34:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3920216' 00:06:00.194 
killing process with pid 3920216 00:06:00.194 01:34:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # kill 3920216 00:06:00.194 01:34:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # wait 3920216 00:06:00.450 00:06:00.450 real 0m3.255s 00:06:00.450 user 0m3.387s 00:06:00.450 sys 0m1.111s 00:06:00.450 01:34:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:00.450 01:34:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.450 ************************************ 00:06:00.450 END TEST non_locking_app_on_locked_coremask 00:06:00.450 ************************************ 00:06:00.450 01:34:24 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:00.450 01:34:24 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:00.450 01:34:24 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:00.450 01:34:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.707 ************************************ 00:06:00.708 START TEST locking_app_on_unlocked_coremask 00:06:00.708 ************************************ 00:06:00.708 01:34:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # locking_app_on_unlocked_coremask 00:06:00.708 01:34:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3920632 00:06:00.708 01:34:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:00.708 01:34:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3920632 /var/tmp/spdk.sock 00:06:00.708 01:34:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@828 -- # '[' -z 3920632 ']' 00:06:00.708 01:34:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.708 01:34:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:00.708 01:34:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.708 01:34:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:00.708 01:34:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.708 [2024-05-15 01:34:24.439389] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:06:00.708 [2024-05-15 01:34:24.439493] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3920632 ] 00:06:00.708 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.708 [2024-05-15 01:34:24.505254] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
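locking_app_on_unlocked_coremask is the inverse case: the first spdk_tgt runs with --disable-cpumask-locks (hence the "CPU core locks deactivated." notice above), so a second, normally-locking instance on the same core can still claim the lock. The locks_exist helper then verifies the claim exactly as echoed further down in the trace; a minimal sketch of that check, with pid2 taken from the log:

  # succeed iff the target still holds a per-core lock file
  lslocks -p "$pid2" | grep -q spdk_cpu_lock && echo "core lock held by $pid2"

The "lslocks: write error" lines scattered through this output appear to be lslocks hitting a broken pipe once grep -q exits on the first match; they do not indicate a test failure.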
00:06:00.708 [2024-05-15 01:34:24.505297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.708 [2024-05-15 01:34:24.590295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.115 01:34:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:01.115 01:34:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@861 -- # return 0 00:06:01.115 01:34:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3920635 00:06:01.115 01:34:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:01.115 01:34:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3920635 /var/tmp/spdk2.sock 00:06:01.115 01:34:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@828 -- # '[' -z 3920635 ']' 00:06:01.115 01:34:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.115 01:34:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:01.115 01:34:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:01.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:01.115 01:34:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:01.115 01:34:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.115 [2024-05-15 01:34:24.890133] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
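The repeated "Waiting for process to start up and listen on UNIX domain socket ..." lines come from waitforlisten, which polls until the target's RPC socket is available (rpc_addr and max_retries=100 are visible in the xtrace). A hedged sketch of that loop, with semantics inferred from the trace rather than copied from autotest_common.sh; the real helper likely also probes the RPC server, while this only waits for the socket file:

  waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
    while (( max_retries-- > 0 )); do
      kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
      [ -S "$rpc_addr" ] && return 0           # UNIX domain socket is up
      sleep 0.1
    done
    return 1                                   # timed out
  }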
00:06:01.115 [2024-05-15 01:34:24.890228] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3920635 ] 00:06:01.115 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.115 [2024-05-15 01:34:25.000769] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.403 [2024-05-15 01:34:25.177382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.967 01:34:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:01.967 01:34:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@861 -- # return 0 00:06:01.967 01:34:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3920635 00:06:01.967 01:34:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:01.967 01:34:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3920635 00:06:02.532 lslocks: write error 00:06:02.532 01:34:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3920632 00:06:02.532 01:34:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # '[' -z 3920632 ']' 00:06:02.532 01:34:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # kill -0 3920632 00:06:02.532 01:34:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # uname 00:06:02.532 01:34:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:02.532 01:34:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3920632 00:06:02.788 01:34:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:02.788 01:34:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:02.788 01:34:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3920632' 00:06:02.788 killing process with pid 3920632 00:06:02.788 01:34:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # kill 3920632 00:06:02.788 01:34:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # wait 3920632 00:06:03.718 01:34:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3920635 00:06:03.718 01:34:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # '[' -z 3920635 ']' 00:06:03.718 01:34:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # kill -0 3920635 00:06:03.718 01:34:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # uname 00:06:03.718 01:34:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:03.718 01:34:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3920635 00:06:03.718 01:34:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 
00:06:03.718 01:34:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:03.718 01:34:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3920635' 00:06:03.718 killing process with pid 3920635 00:06:03.718 01:34:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # kill 3920635 00:06:03.718 01:34:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # wait 3920635 00:06:03.976 00:06:03.976 real 0m3.330s 00:06:03.976 user 0m3.450s 00:06:03.976 sys 0m1.119s 00:06:03.976 01:34:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:03.976 01:34:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.976 ************************************ 00:06:03.976 END TEST locking_app_on_unlocked_coremask 00:06:03.976 ************************************ 00:06:03.976 01:34:27 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:03.976 01:34:27 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:03.976 01:34:27 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:03.976 01:34:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.976 ************************************ 00:06:03.976 START TEST locking_app_on_locked_coremask 00:06:03.976 ************************************ 00:06:03.976 01:34:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # locking_app_on_locked_coremask 00:06:03.976 01:34:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3921068 00:06:03.976 01:34:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:03.976 01:34:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3921068 /var/tmp/spdk.sock 00:06:03.976 01:34:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 3921068 ']' 00:06:03.976 01:34:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.976 01:34:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:03.976 01:34:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.976 01:34:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:03.976 01:34:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.976 [2024-05-15 01:34:27.824959] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
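locking_app_on_locked_coremask, which starts here, launches a second spdk_tgt on the already-locked core 0 without --disable-cpumask-locks and asserts that it fails: the "NOT waitforlisten" xtrace below is the harness inverting the exit status so that a refused startup counts as a pass. A sketch of such an inverter, with approximate semantics rather than the exact SPDK helper:

  NOT() {  # succeed only if the wrapped command fails
    if "$@"; then return 1; else return 0; fi
  }
  NOT waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock \
    && echo "second instance was refused the core lock, as expected"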
00:06:03.976 [2024-05-15 01:34:27.825040] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3921068 ] 00:06:03.976 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.976 [2024-05-15 01:34:27.895341] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.233 [2024-05-15 01:34:27.979273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.491 01:34:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:04.491 01:34:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 0 00:06:04.491 01:34:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3921073 00:06:04.491 01:34:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:04.491 01:34:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3921073 /var/tmp/spdk2.sock 00:06:04.491 01:34:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # local es=0 00:06:04.491 01:34:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 3921073 /var/tmp/spdk2.sock 00:06:04.491 01:34:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:06:04.491 01:34:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:04.491 01:34:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:06:04.491 01:34:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:04.491 01:34:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # waitforlisten 3921073 /var/tmp/spdk2.sock 00:06:04.491 01:34:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 3921073 ']' 00:06:04.491 01:34:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:04.491 01:34:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:04.491 01:34:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:04.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:04.491 01:34:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:04.491 01:34:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.491 [2024-05-15 01:34:28.288650] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
00:06:04.491 [2024-05-15 01:34:28.288729] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3921073 ] 00:06:04.491 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.491 [2024-05-15 01:34:28.399993] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3921068 has claimed it. 00:06:04.491 [2024-05-15 01:34:28.400063] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:05.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 843: kill: (3921073) - No such process 00:06:05.421 ERROR: process (pid: 3921073) is no longer running 00:06:05.421 01:34:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:05.421 01:34:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 1 00:06:05.421 01:34:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # es=1 00:06:05.421 01:34:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:05.421 01:34:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:05.421 01:34:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:05.422 01:34:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3921068 00:06:05.422 01:34:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3921068 00:06:05.422 01:34:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:05.678 lslocks: write error 00:06:05.678 01:34:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3921068 00:06:05.678 01:34:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' -z 3921068 ']' 00:06:05.678 01:34:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # kill -0 3921068 00:06:05.678 01:34:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # uname 00:06:05.678 01:34:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:05.678 01:34:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3921068 00:06:05.678 01:34:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:05.678 01:34:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:05.678 01:34:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3921068' 00:06:05.678 killing process with pid 3921068 00:06:05.678 01:34:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # kill 3921068 00:06:05.679 01:34:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # wait 3921068 00:06:06.242 00:06:06.242 real 0m2.107s 00:06:06.242 user 0m2.244s 00:06:06.242 sys 0m0.688s 00:06:06.242 01:34:29 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1123 -- # xtrace_disable 00:06:06.242 01:34:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.242 ************************************ 00:06:06.243 END TEST locking_app_on_locked_coremask 00:06:06.243 ************************************ 00:06:06.243 01:34:29 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:06.243 01:34:29 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:06.243 01:34:29 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:06.243 01:34:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.243 ************************************ 00:06:06.243 START TEST locking_overlapped_coremask 00:06:06.243 ************************************ 00:06:06.243 01:34:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # locking_overlapped_coremask 00:06:06.243 01:34:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3921366 00:06:06.243 01:34:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:06.243 01:34:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3921366 /var/tmp/spdk.sock 00:06:06.243 01:34:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@828 -- # '[' -z 3921366 ']' 00:06:06.243 01:34:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.243 01:34:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:06.243 01:34:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.243 01:34:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:06.243 01:34:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.243 [2024-05-15 01:34:29.991315] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
00:06:06.243 [2024-05-15 01:34:29.991395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3921366 ] 00:06:06.243 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.243 [2024-05-15 01:34:30.067326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:06.243 [2024-05-15 01:34:30.156419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.243 [2024-05-15 01:34:30.156471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.243 [2024-05-15 01:34:30.156488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.499 01:34:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:06.500 01:34:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@861 -- # return 0 00:06:06.500 01:34:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3921372 00:06:06.500 01:34:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:06.500 01:34:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3921372 /var/tmp/spdk2.sock 00:06:06.500 01:34:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # local es=0 00:06:06.500 01:34:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 3921372 /var/tmp/spdk2.sock 00:06:06.500 01:34:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:06:06.500 01:34:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:06.500 01:34:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:06:06.500 01:34:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:06.500 01:34:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # waitforlisten 3921372 /var/tmp/spdk2.sock 00:06:06.500 01:34:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@828 -- # '[' -z 3921372 ']' 00:06:06.500 01:34:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:06.500 01:34:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:06.500 01:34:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:06.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:06.500 01:34:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:06.500 01:34:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.756 [2024-05-15 01:34:30.468493] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
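The overlap this test exploits is plain bitmask arithmetic: -m 0x7 pins reactors to cores 0-2 and -m 0x1c to cores 2-4, so the two masks intersect on core 2. A one-liner to see the collision (bash arithmetic, nothing SPDK-specific):

  printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2

which is exactly the core named in the "Cannot create lock on core 2" error below.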
00:06:06.756 [2024-05-15 01:34:30.468582] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3921372 ] 00:06:06.756 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.756 [2024-05-15 01:34:30.570375] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3921366 has claimed it. 00:06:06.756 [2024-05-15 01:34:30.570433] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:07.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 843: kill: (3921372) - No such process 00:06:07.319 ERROR: process (pid: 3921372) is no longer running 00:06:07.319 01:34:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:07.319 01:34:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@861 -- # return 1 00:06:07.319 01:34:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # es=1 00:06:07.319 01:34:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:07.319 01:34:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:07.319 01:34:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:07.319 01:34:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:07.319 01:34:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:07.319 01:34:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:07.319 01:34:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:07.319 01:34:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3921366 00:06:07.319 01:34:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@947 -- # '[' -z 3921366 ']' 00:06:07.319 01:34:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # kill -0 3921366 00:06:07.319 01:34:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # uname 00:06:07.320 01:34:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:07.320 01:34:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3921366 00:06:07.320 01:34:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:07.320 01:34:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:07.320 01:34:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3921366' 00:06:07.320 killing process with pid 3921366 00:06:07.320 01:34:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # kill 
3921366 00:06:07.320 01:34:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # wait 3921366 00:06:07.884 00:06:07.884 real 0m1.663s 00:06:07.884 user 0m4.486s 00:06:07.884 sys 0m0.463s 00:06:07.884 01:34:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:07.884 01:34:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.884 ************************************ 00:06:07.884 END TEST locking_overlapped_coremask 00:06:07.884 ************************************ 00:06:07.884 01:34:31 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:07.884 01:34:31 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:07.884 01:34:31 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:07.884 01:34:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.884 ************************************ 00:06:07.884 START TEST locking_overlapped_coremask_via_rpc 00:06:07.884 ************************************ 00:06:07.884 01:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # locking_overlapped_coremask_via_rpc 00:06:07.884 01:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3921546 00:06:07.884 01:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:07.884 01:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3921546 /var/tmp/spdk.sock 00:06:07.884 01:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 3921546 ']' 00:06:07.884 01:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.884 01:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:07.884 01:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.884 01:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:07.884 01:34:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.884 [2024-05-15 01:34:31.704917] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:06:07.884 [2024-05-15 01:34:31.704982] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3921546 ] 00:06:07.884 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.884 [2024-05-15 01:34:31.785137] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:07.884 [2024-05-15 01:34:31.785190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:08.142 [2024-05-15 01:34:31.879285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.142 [2024-05-15 01:34:31.879311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.142 [2024-05-15 01:34:31.879314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.399 01:34:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:08.399 01:34:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:06:08.399 01:34:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3921672 00:06:08.399 01:34:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3921672 /var/tmp/spdk2.sock 00:06:08.399 01:34:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:08.399 01:34:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 3921672 ']' 00:06:08.400 01:34:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.400 01:34:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:08.400 01:34:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:08.400 01:34:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:08.400 01:34:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.400 [2024-05-15 01:34:32.160055] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:06:08.400 [2024-05-15 01:34:32.160135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3921672 ] 00:06:08.400 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.400 [2024-05-15 01:34:32.259116] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
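At this point both targets are up with locking disabled and their reactor sets overlap on core 2; the via_rpc variant then claims the locks at runtime instead of at startup. A hedged sketch of the same sequence driven through rpc.py, with the socket paths from the log and the script location assumed to be the standard SPDK scripts/ directory:

  # first target claims cores 0-2; succeeds because nobody holds them yet
  scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
  # second target tries to claim cores 2-4; core 2 is taken, so this fails
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks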
00:06:08.400 [2024-05-15 01:34:32.259156] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:08.657 [2024-05-15 01:34:32.428917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:08.657 [2024-05-15 01:34:32.432267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:08.657 [2024-05-15 01:34:32.432270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:09.223 01:34:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:09.223 01:34:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:06:09.223 01:34:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:09.223 01:34:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:09.223 01:34:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.223 01:34:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:09.223 01:34:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:09.223 01:34:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # local es=0 00:06:09.223 01:34:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:09.223 01:34:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:06:09.223 01:34:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:09.223 01:34:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:06:09.223 01:34:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:09.223 01:34:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:09.223 01:34:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:09.223 01:34:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.223 [2024-05-15 01:34:33.104321] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3921546 has claimed it. 
00:06:09.223 request: 00:06:09.223 { 00:06:09.223 "method": "framework_enable_cpumask_locks", 00:06:09.223 "req_id": 1 00:06:09.223 } 00:06:09.223 Got JSON-RPC error response 00:06:09.223 response: 00:06:09.223 { 00:06:09.223 "code": -32603, 00:06:09.223 "message": "Failed to claim CPU core: 2" 00:06:09.223 } 00:06:09.223 01:34:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:06:09.223 01:34:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # es=1 00:06:09.223 01:34:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:09.223 01:34:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:09.223 01:34:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:09.223 01:34:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3921546 /var/tmp/spdk.sock 00:06:09.223 01:34:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 3921546 ']' 00:06:09.223 01:34:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.223 01:34:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:09.223 01:34:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.223 01:34:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:09.223 01:34:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.480 01:34:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:09.480 01:34:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:06:09.480 01:34:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3921672 /var/tmp/spdk2.sock 00:06:09.480 01:34:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 3921672 ']' 00:06:09.480 01:34:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:09.480 01:34:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:09.480 01:34:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:09.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
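The -32603 code in the response above is the generic JSON-RPC "internal error"; the useful detail is the message, which names the contested core. The locks themselves appear to be ordinary files, one per claimed core, which is what check_remaining_locks globs for later in this trace. A quick way to inspect them on a live system, using the paths from the log's expected list:

  ls /var/tmp/spdk_cpu_lock_*    # e.g. spdk_cpu_lock_000 ..._001 ..._002 for mask 0x7
  lslocks | grep spdk_cpu_lock   # the advisory locks held on those files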
00:06:09.480 01:34:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:09.480 01:34:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.741 01:34:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:09.741 01:34:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:06:09.741 01:34:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:09.741 01:34:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:09.741 01:34:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:09.741 01:34:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:09.741 00:06:09.741 real 0m1.942s 00:06:09.741 user 0m1.055s 00:06:09.741 sys 0m0.162s 00:06:09.741 01:34:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:09.741 01:34:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.741 ************************************ 00:06:09.741 END TEST locking_overlapped_coremask_via_rpc 00:06:09.741 ************************************ 00:06:09.741 01:34:33 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:09.741 01:34:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3921546 ]] 00:06:09.742 01:34:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3921546 00:06:09.742 01:34:33 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 3921546 ']' 00:06:09.742 01:34:33 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 3921546 00:06:09.742 01:34:33 event.cpu_locks -- common/autotest_common.sh@952 -- # uname 00:06:09.742 01:34:33 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:09.742 01:34:33 event.cpu_locks -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3921546 00:06:09.742 01:34:33 event.cpu_locks -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:09.742 01:34:33 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:09.742 01:34:33 event.cpu_locks -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3921546' 00:06:09.742 killing process with pid 3921546 00:06:09.742 01:34:33 event.cpu_locks -- common/autotest_common.sh@966 -- # kill 3921546 00:06:09.742 01:34:33 event.cpu_locks -- common/autotest_common.sh@971 -- # wait 3921546 00:06:10.308 01:34:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3921672 ]] 00:06:10.308 01:34:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3921672 00:06:10.308 01:34:34 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 3921672 ']' 00:06:10.308 01:34:34 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 3921672 00:06:10.308 01:34:34 event.cpu_locks -- common/autotest_common.sh@952 -- # uname 00:06:10.308 01:34:34 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' 
Linux = Linux ']' 00:06:10.308 01:34:34 event.cpu_locks -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3921672 00:06:10.308 01:34:34 event.cpu_locks -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:06:10.308 01:34:34 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:06:10.308 01:34:34 event.cpu_locks -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3921672' 00:06:10.308 killing process with pid 3921672 00:06:10.308 01:34:34 event.cpu_locks -- common/autotest_common.sh@966 -- # kill 3921672 00:06:10.308 01:34:34 event.cpu_locks -- common/autotest_common.sh@971 -- # wait 3921672 00:06:10.566 01:34:34 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:10.566 01:34:34 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:10.566 01:34:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3921546 ]] 00:06:10.566 01:34:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3921546 00:06:10.566 01:34:34 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 3921546 ']' 00:06:10.566 01:34:34 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 3921546 00:06:10.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (3921546) - No such process 00:06:10.566 01:34:34 event.cpu_locks -- common/autotest_common.sh@974 -- # echo 'Process with pid 3921546 is not found' 00:06:10.566 Process with pid 3921546 is not found 00:06:10.566 01:34:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3921672 ]] 00:06:10.566 01:34:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3921672 00:06:10.566 01:34:34 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 3921672 ']' 00:06:10.566 01:34:34 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 3921672 00:06:10.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (3921672) - No such process 00:06:10.566 01:34:34 event.cpu_locks -- common/autotest_common.sh@974 -- # echo 'Process with pid 3921672 is not found' 00:06:10.566 Process with pid 3921672 is not found 00:06:10.566 01:34:34 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:10.566 00:06:10.566 real 0m16.127s 00:06:10.566 user 0m27.581s 00:06:10.566 sys 0m5.604s 00:06:10.566 01:34:34 event.cpu_locks -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:10.566 01:34:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.566 ************************************ 00:06:10.566 END TEST cpu_locks 00:06:10.566 ************************************ 00:06:10.566 00:06:10.566 real 0m42.199s 00:06:10.566 user 1m19.934s 00:06:10.566 sys 0m9.856s 00:06:10.566 01:34:34 event -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:10.566 01:34:34 event -- common/autotest_common.sh@10 -- # set +x 00:06:10.566 ************************************ 00:06:10.566 END TEST event 00:06:10.566 ************************************ 00:06:10.825 01:34:34 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:10.825 01:34:34 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:10.825 01:34:34 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:10.825 01:34:34 -- common/autotest_common.sh@10 -- # set +x 00:06:10.825 ************************************ 00:06:10.825 START TEST thread 00:06:10.825 ************************************ 00:06:10.825 01:34:34 thread -- 
common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:10.825 * Looking for test storage... 00:06:10.825 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:10.825 01:34:34 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:10.825 01:34:34 thread -- common/autotest_common.sh@1098 -- # '[' 8 -le 1 ']' 00:06:10.825 01:34:34 thread -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:10.825 01:34:34 thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.825 ************************************ 00:06:10.825 START TEST thread_poller_perf 00:06:10.825 ************************************ 00:06:10.825 01:34:34 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:10.825 [2024-05-15 01:34:34.647665] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:06:10.825 [2024-05-15 01:34:34.647731] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3922041 ] 00:06:10.825 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.825 [2024-05-15 01:34:34.717145] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.083 [2024-05-15 01:34:34.804378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.083 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:12.016 ====================================== 00:06:12.016 busy:2710535315 (cyc) 00:06:12.016 total_run_count: 291000 00:06:12.017 tsc_hz: 2700000000 (cyc) 00:06:12.017 ====================================== 00:06:12.017 poller_cost: 9314 (cyc), 3449 (nsec) 00:06:12.017 00:06:12.017 real 0m1.259s 00:06:12.017 user 0m1.171s 00:06:12.017 sys 0m0.083s 00:06:12.017 01:34:35 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:12.017 01:34:35 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:12.017 ************************************ 00:06:12.017 END TEST thread_poller_perf 00:06:12.017 ************************************ 00:06:12.017 01:34:35 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:12.017 01:34:35 thread -- common/autotest_common.sh@1098 -- # '[' 8 -le 1 ']' 00:06:12.017 01:34:35 thread -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:12.017 01:34:35 thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.017 ************************************ 00:06:12.017 START TEST thread_poller_perf 00:06:12.017 ************************************ 00:06:12.017 01:34:35 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:12.274 [2024-05-15 01:34:35.960193] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
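The poller_cost figures printed for the first run above follow directly from the counters beside them: cycles per poll is busy divided by total_run_count, converted to nanoseconds via tsc_hz. A quick shell re-derivation of that arithmetic (illustrative only; poller_perf computes this in C):

    # First run's numbers, copied from the result block above
    busy=2710535315         # busy TSC cycles across the 1-second run
    total_run_count=291000  # completed poller invocations
    tsc_hz=2700000000       # 2.7 GHz timestamp counter
    cost_cyc=$((busy / total_run_count))         # -> 9314 cycles per poll
    cost_ns=$((cost_cyc * 1000000000 / tsc_hz))  # -> 3449 ns
    echo "poller_cost: ${cost_cyc} (cyc), ${cost_ns} (nsec)"

The second run starting here uses a 0-microsecond period; its cost lands far lower (686 cycles in the result block below), plausibly because the timed-poller bookkeeping drops out of the hot path.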
00:06:12.274 [2024-05-15 01:34:35.960283] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3922193 ] 00:06:12.274 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.274 [2024-05-15 01:34:36.030997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.274 [2024-05-15 01:34:36.121798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.274 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:13.647 ====================================== 00:06:13.647 busy:2702869981 (cyc) 00:06:13.647 total_run_count: 3936000 00:06:13.647 tsc_hz: 2700000000 (cyc) 00:06:13.647 ====================================== 00:06:13.647 poller_cost: 686 (cyc), 254 (nsec) 00:06:13.647 00:06:13.647 real 0m1.256s 00:06:13.647 user 0m1.154s 00:06:13.647 sys 0m0.096s 00:06:13.647 01:34:37 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:13.647 01:34:37 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:13.647 ************************************ 00:06:13.647 END TEST thread_poller_perf 00:06:13.647 ************************************ 00:06:13.647 01:34:37 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:13.647 00:06:13.647 real 0m2.680s 00:06:13.647 user 0m2.383s 00:06:13.647 sys 0m0.293s 00:06:13.647 01:34:37 thread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:13.647 01:34:37 thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.647 ************************************ 00:06:13.647 END TEST thread 00:06:13.647 ************************************ 00:06:13.647 01:34:37 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:13.647 01:34:37 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:13.647 01:34:37 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:13.647 01:34:37 -- common/autotest_common.sh@10 -- # set +x 00:06:13.647 ************************************ 00:06:13.647 START TEST accel 00:06:13.647 ************************************ 00:06:13.647 01:34:37 accel -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:13.647 * Looking for test storage... 
00:06:13.647 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:13.647 01:34:37 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:13.647 01:34:37 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:13.647 01:34:37 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:13.647 01:34:37 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3922390 00:06:13.647 01:34:37 accel -- accel/accel.sh@63 -- # waitforlisten 3922390 00:06:13.647 01:34:37 accel -- common/autotest_common.sh@828 -- # '[' -z 3922390 ']' 00:06:13.647 01:34:37 accel -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.647 01:34:37 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:13.647 01:34:37 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:13.647 01:34:37 accel -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:13.647 01:34:37 accel -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.647 01:34:37 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.647 01:34:37 accel -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:13.647 01:34:37 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.647 01:34:37 accel -- common/autotest_common.sh@10 -- # set +x 00:06:13.647 01:34:37 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.647 01:34:37 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.647 01:34:37 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.647 01:34:37 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:13.647 01:34:37 accel -- accel/accel.sh@41 -- # jq -r . 00:06:13.647 [2024-05-15 01:34:37.378844] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:06:13.647 [2024-05-15 01:34:37.378953] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3922390 ] 00:06:13.647 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.647 [2024-05-15 01:34:37.445779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.647 [2024-05-15 01:34:37.528371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.905 01:34:37 accel -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:13.905 01:34:37 accel -- common/autotest_common.sh@861 -- # return 0 00:06:13.905 01:34:37 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:13.905 01:34:37 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:13.905 01:34:37 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:13.905 01:34:37 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:13.905 01:34:37 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:13.905 01:34:37 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:13.905 01:34:37 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:13.905 01:34:37 accel -- common/autotest_common.sh@10 -- # set +x 00:06:13.905 01:34:37 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:13.905 01:34:37 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:13.905 01:34:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.905 01:34:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:13.905 01:34:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:13.905 01:34:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:13.905 01:34:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.905 01:34:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:13.905 01:34:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:13.905 01:34:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:13.905 01:34:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.905 01:34:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:13.905 01:34:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:13.905 01:34:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:13.905 01:34:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.905 01:34:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:13.905 01:34:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:13.905 01:34:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:13.905 01:34:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.905 01:34:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:13.905 01:34:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:13.905 01:34:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:13.905 01:34:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.905 01:34:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:13.905 01:34:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:13.905 01:34:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:13.905 01:34:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.905 01:34:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:13.905 01:34:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:13.905 01:34:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:13.905 01:34:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.905 01:34:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:13.905 01:34:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:13.905 01:34:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:13.905 01:34:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.905 01:34:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:13.905 01:34:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:13.905 01:34:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:13.905 01:34:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.905 01:34:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:13.905 01:34:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:13.905 01:34:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:13.905 01:34:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.905 01:34:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:13.905 01:34:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:13.905 01:34:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:13.905 01:34:37 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:06:13.905 01:34:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:13.905 01:34:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:13.905 01:34:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:13.905 01:34:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.905 01:34:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:13.905 01:34:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:13.905 01:34:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:13.905 01:34:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:13.905 01:34:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:13.905 01:34:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:13.905 01:34:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:13.905 01:34:37 accel -- accel/accel.sh@75 -- # killprocess 3922390 00:06:13.905 01:34:37 accel -- common/autotest_common.sh@947 -- # '[' -z 3922390 ']' 00:06:13.905 01:34:37 accel -- common/autotest_common.sh@951 -- # kill -0 3922390 00:06:13.905 01:34:37 accel -- common/autotest_common.sh@952 -- # uname 00:06:13.905 01:34:37 accel -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:13.905 01:34:37 accel -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3922390 00:06:14.163 01:34:37 accel -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:14.163 01:34:37 accel -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:14.163 01:34:37 accel -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3922390' 00:06:14.163 killing process with pid 3922390 00:06:14.163 01:34:37 accel -- common/autotest_common.sh@966 -- # kill 3922390 00:06:14.163 01:34:37 accel -- common/autotest_common.sh@971 -- # wait 3922390 00:06:14.422 01:34:38 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:14.422 01:34:38 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:14.422 01:34:38 accel -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:06:14.422 01:34:38 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:14.422 01:34:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.422 01:34:38 accel.accel_help -- common/autotest_common.sh@1122 -- # accel_perf -h 00:06:14.422 01:34:38 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:14.422 01:34:38 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:14.422 01:34:38 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.422 01:34:38 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.422 01:34:38 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.422 01:34:38 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.422 01:34:38 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.422 01:34:38 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:14.422 01:34:38 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:14.422 01:34:38 accel.accel_help -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:14.422 01:34:38 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:14.422 01:34:38 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:14.422 01:34:38 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:06:14.422 01:34:38 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:14.422 01:34:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.680 ************************************ 00:06:14.680 START TEST accel_missing_filename 00:06:14.680 ************************************ 00:06:14.680 01:34:38 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w compress 00:06:14.680 01:34:38 accel.accel_missing_filename -- common/autotest_common.sh@649 -- # local es=0 00:06:14.680 01:34:38 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:14.680 01:34:38 accel.accel_missing_filename -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:14.680 01:34:38 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:14.680 01:34:38 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:14.680 01:34:38 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:14.680 01:34:38 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress 00:06:14.680 01:34:38 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:14.680 01:34:38 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:14.680 01:34:38 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.680 01:34:38 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.680 01:34:38 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.680 01:34:38 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.680 01:34:38 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.680 01:34:38 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:14.680 01:34:38 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:14.680 [2024-05-15 01:34:38.372902] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:06:14.680 [2024-05-15 01:34:38.372966] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3922554 ] 00:06:14.680 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.680 [2024-05-15 01:34:38.445819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.680 [2024-05-15 01:34:38.538401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.680 [2024-05-15 01:34:38.598433] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:14.938 [2024-05-15 01:34:38.686192] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:06:14.938 A filename is required. 
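The es=234, es=106, es=1 hops in the records just below are the harness's NOT() helper normalizing the failure: accel_perf was expected to abort here (compress with no -l input file), and NOT() passes only when the wrapped command fails. A loose sketch of that normalization, assuming the real autotest_common.sh distinguishes more signal cases than shown:

    # Loose sketch of NOT() as traced below; the case branches are an
    # assumption. Exit statuses above 128 (signal range) have the offset
    # stripped (234 -> 106) and then collapse to a generic failure of 1.
    NOT() {
        local es=0
        "$@" || es=$?
        if ((es > 128)); then
            es=$((es & ~128))       # 234 -> 106
            case "$es" in
                *) es=1 ;;          # treat any signal-range exit as failure
            esac
        fi
        ((!es == 0))                # return success only if the command failed
    }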
00:06:14.938 01:34:38 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # es=234 00:06:14.938 01:34:38 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:14.938 01:34:38 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # es=106 00:06:14.938 01:34:38 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # case "$es" in 00:06:14.938 01:34:38 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # es=1 00:06:14.938 01:34:38 accel.accel_missing_filename -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:14.938 00:06:14.938 real 0m0.412s 00:06:14.938 user 0m0.289s 00:06:14.938 sys 0m0.156s 00:06:14.938 01:34:38 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:14.938 01:34:38 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:14.938 ************************************ 00:06:14.938 END TEST accel_missing_filename 00:06:14.938 ************************************ 00:06:14.938 01:34:38 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:14.938 01:34:38 accel -- common/autotest_common.sh@1098 -- # '[' 10 -le 1 ']' 00:06:14.938 01:34:38 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:14.938 01:34:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.938 ************************************ 00:06:14.938 START TEST accel_compress_verify 00:06:14.938 ************************************ 00:06:14.938 01:34:38 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:14.938 01:34:38 accel.accel_compress_verify -- common/autotest_common.sh@649 -- # local es=0 00:06:14.938 01:34:38 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:14.938 01:34:38 accel.accel_compress_verify -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:14.938 01:34:38 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:14.938 01:34:38 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:14.938 01:34:38 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:14.938 01:34:38 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:14.938 01:34:38 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:14.938 01:34:38 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:14.938 01:34:38 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.938 01:34:38 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.938 01:34:38 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.938 01:34:38 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.938 01:34:38 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.938 
01:34:38 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:14.938 01:34:38 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:14.938 [2024-05-15 01:34:38.832957] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:06:14.938 [2024-05-15 01:34:38.833024] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3922704 ] 00:06:14.938 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.196 [2024-05-15 01:34:38.902836] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.196 [2024-05-15 01:34:38.991983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.196 [2024-05-15 01:34:39.054037] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:15.455 [2024-05-15 01:34:39.142396] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:06:15.455 00:06:15.455 Compression does not support the verify option, aborting. 00:06:15.455 01:34:39 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # es=161 00:06:15.455 01:34:39 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:15.455 01:34:39 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # es=33 00:06:15.455 01:34:39 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # case "$es" in 00:06:15.455 01:34:39 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # es=1 00:06:15.455 01:34:39 accel.accel_compress_verify -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:15.455 00:06:15.455 real 0m0.409s 00:06:15.455 user 0m0.289s 00:06:15.455 sys 0m0.150s 00:06:15.455 01:34:39 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:15.455 01:34:39 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:15.455 ************************************ 00:06:15.455 END TEST accel_compress_verify 00:06:15.455 ************************************ 00:06:15.455 01:34:39 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:15.455 01:34:39 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:06:15.455 01:34:39 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:15.455 01:34:39 accel -- common/autotest_common.sh@10 -- # set +x 00:06:15.455 ************************************ 00:06:15.455 START TEST accel_wrong_workload 00:06:15.455 ************************************ 00:06:15.455 01:34:39 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w foobar 00:06:15.455 01:34:39 accel.accel_wrong_workload -- common/autotest_common.sh@649 -- # local es=0 00:06:15.455 01:34:39 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:15.455 01:34:39 accel.accel_wrong_workload -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:15.455 01:34:39 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:15.455 01:34:39 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:15.455 01:34:39 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:15.455 01:34:39 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w foobar 
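Before NOT() executes each of these deliberately bad invocations, it runs the valid_exec_arg guard traced above, which only checks that the first word resolves to something bash can execute. Roughly (the accepted branch list is an assumption; only the case-on-"type -t" shape is visible in the trace):

    # Rough sketch of valid_exec_arg; the exact accepted types are assumed.
    valid_exec_arg() {
        local arg=$1
        case "$(type -t "$arg")" in
            function | builtin | file) return 0 ;;
            *) return 1 ;;
        esac
    }
    valid_exec_arg accel_perf && echo "accel_perf is runnable"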
00:06:15.455 01:34:39 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:15.455 01:34:39 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:15.455 01:34:39 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.455 01:34:39 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.455 01:34:39 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.456 01:34:39 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.456 01:34:39 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.456 01:34:39 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:15.456 01:34:39 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:15.456 Unsupported workload type: foobar 00:06:15.456 [2024-05-15 01:34:39.291941] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:15.456 accel_perf options: 00:06:15.456 [-h help message] 00:06:15.456 [-q queue depth per core] 00:06:15.456 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:15.456 [-T number of threads per core 00:06:15.456 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:15.456 [-t time in seconds] 00:06:15.456 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:15.456 [ dif_verify, , dif_generate, dif_generate_copy 00:06:15.456 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:15.456 [-l for compress/decompress workloads, name of uncompressed input file 00:06:15.456 [-S for crc32c workload, use this seed value (default 0) 00:06:15.456 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:15.456 [-f for fill workload, use this BYTE value (default 255) 00:06:15.456 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:15.456 [-y verify result if this switch is on] 00:06:15.456 [-a tasks to allocate per core (default: same value as -q)] 00:06:15.456 Can be used to spread operations across a wider range of memory. 
00:06:15.456 01:34:39 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # es=1 00:06:15.456 01:34:39 accel.accel_wrong_workload -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:15.456 01:34:39 accel.accel_wrong_workload -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:15.456 01:34:39 accel.accel_wrong_workload -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:15.456 00:06:15.456 real 0m0.020s 00:06:15.456 user 0m0.011s 00:06:15.456 sys 0m0.009s 00:06:15.456 01:34:39 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:15.456 01:34:39 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:15.456 ************************************ 00:06:15.456 END TEST accel_wrong_workload 00:06:15.456 ************************************ 00:06:15.456 Error: writing output failed: Broken pipe 00:06:15.456 01:34:39 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:15.456 01:34:39 accel -- common/autotest_common.sh@1098 -- # '[' 10 -le 1 ']' 00:06:15.456 01:34:39 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:15.456 01:34:39 accel -- common/autotest_common.sh@10 -- # set +x 00:06:15.456 ************************************ 00:06:15.456 START TEST accel_negative_buffers 00:06:15.456 ************************************ 00:06:15.456 01:34:39 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:15.456 01:34:39 accel.accel_negative_buffers -- common/autotest_common.sh@649 -- # local es=0 00:06:15.456 01:34:39 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:15.456 01:34:39 accel.accel_negative_buffers -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:15.456 01:34:39 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:15.456 01:34:39 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:15.456 01:34:39 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:15.456 01:34:39 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w xor -y -x -1 00:06:15.456 01:34:39 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:15.456 01:34:39 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:15.456 01:34:39 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.456 01:34:39 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.456 01:34:39 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.456 01:34:39 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.456 01:34:39 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.456 01:34:39 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:15.456 01:34:39 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:15.456 -x option must be non-negative. 
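The parser error above triggers one more dump of the usage text below, after which the section moves from negative cases to real crc32c work. For reference, the checksum those runs verify is the Castagnoli CRC-32C; a bash rendition of the reflected algorithm, purely illustrative (the software module does this in C over 4096-byte buffers, and accel_perf's exact -S seeding convention is not shown in this log):

    # Reflected CRC-32C, polynomial 0x82F63B78 (Castagnoli). Reference only;
    # bash is far too slow for real buffers. Seed handling here is an
    # assumption made for illustration.
    crc32c() { # usage: crc32c SEED BYTE...
        local -i crc=$(($1 ^ 0xFFFFFFFF)) byte i
        shift
        for byte in "$@"; do
            ((crc ^= byte))
            for ((i = 0; i < 8; i++)); do
                ((crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78 : crc >> 1))
            done
        done
        printf '0x%08x\n' $((crc ^ 0xFFFFFFFF))
    }
    crc32c 0 49 50 51 52 53 54 55 56 57   # bytes of "123456789" -> 0xe3069283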
00:06:15.456 [2024-05-15 01:34:39.362843] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:15.456 accel_perf options: 00:06:15.456 [-h help message] 00:06:15.456 [-q queue depth per core] 00:06:15.456 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:15.456 [-T number of threads per core 00:06:15.456 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:15.456 [-t time in seconds] 00:06:15.456 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:15.456 [ dif_verify, , dif_generate, dif_generate_copy 00:06:15.456 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:15.456 [-l for compress/decompress workloads, name of uncompressed input file 00:06:15.456 [-S for crc32c workload, use this seed value (default 0) 00:06:15.456 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:15.456 [-f for fill workload, use this BYTE value (default 255) 00:06:15.456 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:15.456 [-y verify result if this switch is on] 00:06:15.456 [-a tasks to allocate per core (default: same value as -q)] 00:06:15.456 Can be used to spread operations across a wider range of memory. 00:06:15.456 01:34:39 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # es=1 00:06:15.456 01:34:39 accel.accel_negative_buffers -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:15.456 01:34:39 accel.accel_negative_buffers -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:15.456 01:34:39 accel.accel_negative_buffers -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:15.456 00:06:15.456 real 0m0.023s 00:06:15.456 user 0m0.014s 00:06:15.456 sys 0m0.009s 00:06:15.456 01:34:39 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:15.456 01:34:39 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:15.456 ************************************ 00:06:15.456 END TEST accel_negative_buffers 00:06:15.456 ************************************ 00:06:15.456 Error: writing output failed: Broken pipe 00:06:15.715 01:34:39 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:15.715 01:34:39 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:06:15.715 01:34:39 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:15.715 01:34:39 accel -- common/autotest_common.sh@10 -- # set +x 00:06:15.715 ************************************ 00:06:15.715 START TEST accel_crc32c 00:06:15.715 ************************************ 00:06:15.715 01:34:39 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:15.715 01:34:39 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:15.715 01:34:39 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:15.715 01:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.715 01:34:39 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:15.715 01:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.715 01:34:39 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 
-y 00:06:15.715 01:34:39 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:15.715 01:34:39 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.715 01:34:39 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.715 01:34:39 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.715 01:34:39 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.715 01:34:39 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.715 01:34:39 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:15.715 01:34:39 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:15.715 [2024-05-15 01:34:39.430554] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:06:15.715 [2024-05-15 01:34:39.430621] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3922768 ] 00:06:15.715 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.715 [2024-05-15 01:34:39.499821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.715 [2024-05-15 01:34:39.590318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.973 01:34:39 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.973 01:34:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.907 01:34:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:16.907 01:34:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.907 01:34:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.907 01:34:40 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:06:16.907 01:34:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:16.907 01:34:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.907 01:34:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.907 01:34:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.907 01:34:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:16.907 01:34:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.907 01:34:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.907 01:34:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.907 01:34:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:16.907 01:34:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.907 01:34:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.907 01:34:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.907 01:34:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:16.907 01:34:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.907 01:34:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.907 01:34:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.907 01:34:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:16.907 01:34:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.907 01:34:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.907 01:34:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.907 01:34:40 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:16.907 01:34:40 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:16.907 01:34:40 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.907 00:06:16.907 real 0m1.401s 00:06:16.907 user 0m1.248s 00:06:16.907 sys 0m0.155s 00:06:16.908 01:34:40 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:16.908 01:34:40 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:16.908 ************************************ 00:06:16.908 END TEST accel_crc32c 00:06:16.908 ************************************ 00:06:17.167 01:34:40 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:17.167 01:34:40 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:06:17.167 01:34:40 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:17.167 01:34:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.167 ************************************ 00:06:17.167 START TEST accel_crc32c_C2 00:06:17.167 ************************************ 00:06:17.167 01:34:40 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:17.167 01:34:40 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:17.167 01:34:40 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:17.167 01:34:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.167 01:34:40 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:17.167 01:34:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.167 01:34:40 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:17.167 01:34:40 accel.accel_crc32c_C2 -- 
accel/accel.sh@12 -- # build_accel_config 00:06:17.167 01:34:40 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.167 01:34:40 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.167 01:34:40 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.167 01:34:40 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.167 01:34:40 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.167 01:34:40 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:17.167 01:34:40 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:17.167 [2024-05-15 01:34:40.882924] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:06:17.167 [2024-05-15 01:34:40.882994] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3923042 ] 00:06:17.167 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.167 [2024-05-15 01:34:40.952358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.167 [2024-05-15 01:34:41.043567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.457 01:34:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.390 01:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:18.390 01:34:42 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.390 01:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.390 01:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.390 01:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:18.390 01:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.390 01:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.390 01:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.390 01:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:18.390 01:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.390 01:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.390 01:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.390 01:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:18.390 01:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.390 01:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.390 01:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.390 01:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:18.390 01:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.390 01:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.390 01:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.390 01:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:18.390 01:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.390 01:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.390 01:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.390 01:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:18.390 01:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:18.390 01:34:42 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:18.390 00:06:18.390 real 0m1.406s 00:06:18.390 user 0m1.264s 00:06:18.390 sys 0m0.145s 00:06:18.390 01:34:42 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:18.390 01:34:42 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:18.390 ************************************ 00:06:18.390 END TEST accel_crc32c_C2 00:06:18.390 ************************************ 00:06:18.390 01:34:42 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:18.390 01:34:42 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:06:18.390 01:34:42 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:18.390 01:34:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:18.648 ************************************ 00:06:18.648 START TEST accel_copy 00:06:18.648 ************************************ 00:06:18.648 01:34:42 accel.accel_copy -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w copy -y 00:06:18.648 01:34:42 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:18.648 01:34:42 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:18.648 01:34:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.648 01:34:42 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:18.648 01:34:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.648 01:34:42 
00:06:18.648 01:34:42 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
00:06:18.648 01:34:42 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config
00:06:18.648 01:34:42 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:18.648 01:34:42 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:18.648 01:34:42 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:18.648 01:34:42 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:18.648 01:34:42 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:18.648 01:34:42 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=,
00:06:18.648 01:34:42 accel.accel_copy -- accel/accel.sh@41 -- # jq -r .
00:06:18.648 [2024-05-15 01:34:42.339389] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization...
00:06:18.648 [2024-05-15 01:34:42.339452] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3923203 ]
00:06:18.648 EAL: No free 2048 kB hugepages reported on node 1
00:06:18.648 [2024-05-15 01:34:42.409745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:18.648 [2024-05-15 01:34:42.499261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:18.648 01:34:42 accel.accel_copy -- accel/accel.sh@19-21 -- # [xtrace option-parse loop collapsed; parsed values: 0x1, copy (accel_opc=copy), '4096 bytes', software (accel_module=software), 32, 32, 1, '1 seconds', Yes]
00:06:20.020 01:34:43 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:20.020 01:34:43 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]]
00:06:20.021 01:34:43 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:20.021 real 0m1.408s
00:06:20.021 user 0m1.258s
00:06:20.021 sys 0m0.152s
00:06:20.021 01:34:43 accel.accel_copy -- common/autotest_common.sh@1123 -- # xtrace_disable
00:06:20.021 01:34:43 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x
00:06:20.021 ************************************
00:06:20.021 END TEST accel_copy
00:06:20.021 ************************************
00:06:20.021 01:34:43 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:06:20.021 01:34:43 accel -- common/autotest_common.sh@1098 -- # '[' 13 -le 1 ']'
00:06:20.021 01:34:43 accel -- common/autotest_common.sh@1104 -- # xtrace_disable
00:06:20.021 01:34:43 accel -- common/autotest_common.sh@10 -- # set +x
00:06:20.021 ************************************
00:06:20.021 START TEST accel_fill
00:06:20.021 ************************************
00:06:20.021 01:34:43 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
00:06:20.021 01:34:43 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:06:20.021 01:34:43 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config
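The -c /dev/fd/62 argument in the traced commands shows the harness handing build_accel_config's JSON to accel_perf over an anonymous file descriptor rather than a file on disk; in this run accel_json_cfg=() stays empty and every override check is false, so the software module is exercised. A minimal sketch of the same pattern using bash process substitution (the empty '{}' body is an illustrative assumption, not the harness's actual config):

    # feed a JSON config to the app over a /dev/fd/N descriptor via process substitution
    ./build/examples/accel_perf -c <(echo '{}') -t 1 -w copy -y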
00:06:20.021 01:34:43 accel.accel_fill -- accel/accel.sh@31-41 -- # [build_accel_config xtrace identical to accel_copy above: accel_json_cfg=(), all override checks false, jq -r .]
00:06:20.021 [2024-05-15 01:34:43.798167] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization...
00:06:20.021 [2024-05-15 01:34:43.798276] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3923364 ]
00:06:20.021 EAL: No free 2048 kB hugepages reported on node 1
00:06:20.021 [2024-05-15 01:34:43.867722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:20.279 [2024-05-15 01:34:43.958187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:20.279 01:34:44 accel.accel_fill -- accel/accel.sh@19-21 -- # [xtrace option-parse loop collapsed; parsed values: 0x1, fill (accel_opc=fill), 0x80, '4096 bytes', software (accel_module=software), 64, 64, 1, '1 seconds', Yes]
00:06:21.652 01:34:45 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:21.652 01:34:45 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]]
00:06:21.652 01:34:45 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:21.652 real 0m1.416s
00:06:21.652 user 0m1.271s
00:06:21.652 sys 0m0.147s
00:06:21.652 01:34:45 accel.accel_fill -- common/autotest_common.sh@1123 -- # xtrace_disable
00:06:21.652 01:34:45 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x
00:06:21.652 ************************************
00:06:21.652 END TEST accel_fill
00:06:21.652 ************************************
00:06:21.652 01:34:45 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
00:06:21.652 01:34:45 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']'
00:06:21.652 01:34:45 accel -- common/autotest_common.sh@1104 -- # xtrace_disable
00:06:21.652 01:34:45 accel -- common/autotest_common.sh@10 -- # set +x
00:06:21.652 ************************************
00:06:21.652 START TEST accel_copy_crc32c
00:06:21.652 ************************************
00:06:21.652 01:34:45 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y
00:06:21.652 01:34:45 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:06:21.652 01:34:45 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config
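The fill test's extra flags are traced verbatim above (-f 128 -q 64 -a 64), and the 0x80 among the parsed values is simply 128 in hex, confirming the fill byte round-tripped through the option parser:

    # 128 decimal is the 0x80 fill byte seen in the parsed trace values
    printf '0x%x\n' 128    # prints 0x80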
00:06:21.652 [2024-05-15 01:34:45.263082] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization...
00:06:21.652 [2024-05-15 01:34:45.263144] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3923515 ]
00:06:21.652 EAL: No free 2048 kB hugepages reported on node 1
00:06:21.652 [2024-05-15 01:34:45.334851] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:21.652 [2024-05-15 01:34:45.428371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:21.652 01:34:45 accel.accel_copy_crc32c -- accel/accel.sh@19-21 -- # [xtrace option-parse loop collapsed; parsed values: 0x1, copy_crc32c (accel_opc=copy_crc32c), 0, '4096 bytes', '4096 bytes', software (accel_module=software), 32, 32, 1, '1 seconds', Yes]
00:06:23.024 01:34:46 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:23.024 01:34:46 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:06:23.024 01:34:46 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:23.024 real 0m1.410s
00:06:23.024 user 0m1.257s
00:06:23.024 sys 0m0.156s
00:06:23.024 01:34:46 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # xtrace_disable
00:06:23.024 01:34:46 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x
00:06:23.024 ************************************
00:06:23.024 END TEST accel_copy_crc32c
00:06:23.024 ************************************
00:06:23.024 01:34:46 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
00:06:23.024 01:34:46 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']'
00:06:23.024 01:34:46 accel -- common/autotest_common.sh@1104 -- # xtrace_disable
00:06:23.024 01:34:46 accel -- common/autotest_common.sh@10 -- # set +x
00:06:23.024 ************************************
00:06:23.024 START TEST accel_copy_crc32c_C2
00:06:23.024 ************************************
00:06:23.024 01:34:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
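Each test's final check above, [[ software == \s\o\f\t\w\a\r\e ]], escapes every character of the right-hand side so that the [[ == ]] operator compares a literal string rather than a glob pattern; unescaped, characters like * or ? would match as wildcards:

    # inside [[ ]], the right-hand side of == is a glob unless quoted or escaped
    [[ software == s* ]] && echo pattern-match                  # prints pattern-match
    [[ software == \s\o\f\t\w\a\r\e ]] && echo literal-match    # prints literal-match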
00:06:23.024 01:34:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:06:23.024 01:34:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
00:06:23.024 01:34:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31-41 -- # [build_accel_config xtrace identical to accel_copy above]
00:06:23.024 [2024-05-15 01:34:46.725905] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization...
00:06:23.024 [2024-05-15 01:34:46.725968] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3923789 ]
00:06:23.024 EAL: No free 2048 kB hugepages reported on node 1
00:06:23.024 [2024-05-15 01:34:46.796166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:23.025 [2024-05-15 01:34:46.884658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:23.025 01:34:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19-21 -- # [xtrace option-parse loop collapsed; parsed values: 0x1, copy_crc32c (accel_opc=copy_crc32c), 0, '4096 bytes', '8192 bytes', software (accel_module=software), 32, 32, 1, '1 seconds', Yes]
00:06:24.397 01:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:24.397 01:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:06:24.397 01:34:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:24.397 real 0m1.399s
00:06:24.397 user 0m1.254s
00:06:24.397 sys 0m0.148s
00:06:24.397 01:34:48 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # xtrace_disable
00:06:24.397 01:34:48 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x
00:06:24.397 ************************************
00:06:24.397 END TEST accel_copy_crc32c_C2
00:06:24.397 ************************************
00:06:24.397 01:34:48 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:06:24.397 01:34:48 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']'
00:06:24.397 01:34:48 accel -- common/autotest_common.sh@1104 -- # xtrace_disable
00:06:24.397 01:34:48 accel -- common/autotest_common.sh@10 -- # set +x
00:06:24.397 ************************************
00:06:24.397 START TEST accel_dualcast
00:06:24.397 ************************************
00:06:24.397 01:34:48 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
00:06:24.397 01:34:48 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:06:24.397 01:34:48 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config
00:06:24.397 [2024-05-15 01:34:48.180903] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization...
00:06:24.397 [2024-05-15 01:34:48.180967] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3923949 ]
00:06:24.397 EAL: No free 2048 kB hugepages reported on node 1
00:06:24.397 [2024-05-15 01:34:48.253469] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:24.655 [2024-05-15 01:34:48.344169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:24.655 01:34:48 accel.accel_dualcast -- accel/accel.sh@19-21 -- # [xtrace option-parse loop collapsed; parsed values: 0x1, dualcast (accel_opc=dualcast), '4096 bytes', software (accel_module=software), 32, 32, 1, '1 seconds', Yes]
00:06:26.029 01:34:49 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:26.029 01:34:49 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:06:26.029 01:34:49 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:26.029 real 0m1.419s
00:06:26.029 user 0m1.266s
00:06:26.029 sys 0m0.155s
00:06:26.029 01:34:49 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # xtrace_disable
00:06:26.029 01:34:49 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x
00:06:26.029 ************************************
00:06:26.029 END TEST accel_dualcast
00:06:26.029 ************************************
00:06:26.029 01:34:49 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:06:26.029 01:34:49 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']'
00:06:26.029 01:34:49 accel -- common/autotest_common.sh@1104 -- # xtrace_disable
00:06:26.030 01:34:49 accel -- common/autotest_common.sh@10 -- # set +x
00:06:26.030 ************************************
00:06:26.030 START TEST accel_compare
00:06:26.030 ************************************
00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config
00:06:26.030 [2024-05-15 01:34:49.650140] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization...
00:06:26.030 [2024-05-15 01:34:49.650203] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3924103 ] 00:06:26.030 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.030 [2024-05-15 01:34:49.721838] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.030 [2024-05-15 01:34:49.812336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.030 01:34:49 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.030 01:34:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:27.400 01:34:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:27.400 01:34:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:27.400 01:34:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:27.400 01:34:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:27.400 01:34:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:27.400 01:34:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:27.400 01:34:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:27.400 01:34:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:27.400 01:34:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:27.400 01:34:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:27.400 01:34:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:27.400 01:34:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:27.400 01:34:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:27.400 01:34:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:27.400 01:34:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:27.400 01:34:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:27.400 01:34:51 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:06:27.400 01:34:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:27.400 01:34:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:27.400 01:34:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:27.400 01:34:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:27.400 01:34:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:27.400 01:34:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:27.400 01:34:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:27.400 01:34:51 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:27.400 01:34:51 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:27.400 01:34:51 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.400 00:06:27.400 real 0m1.407s 00:06:27.400 user 0m1.261s 00:06:27.400 sys 0m0.148s 00:06:27.400 01:34:51 accel.accel_compare -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:27.400 01:34:51 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:27.400 ************************************ 00:06:27.400 END TEST accel_compare 00:06:27.400 ************************************ 00:06:27.400 01:34:51 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:27.400 01:34:51 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:06:27.400 01:34:51 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:27.400 01:34:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.400 ************************************ 00:06:27.400 START TEST accel_xor 00:06:27.401 ************************************ 00:06:27.401 01:34:51 accel.accel_xor -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w xor -y 00:06:27.401 01:34:51 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:27.401 01:34:51 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:27.401 01:34:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.401 01:34:51 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:27.401 01:34:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.401 01:34:51 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:27.401 01:34:51 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:27.401 01:34:51 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.401 01:34:51 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.401 01:34:51 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.401 01:34:51 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.401 01:34:51 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.401 01:34:51 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:27.401 01:34:51 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:27.401 [2024-05-15 01:34:51.109574] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
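[editor's note] The long runs of "val=" lines above and below are xtrace output from accel.sh's config readout: each accel_perf run's key:value dump is consumed by a "while IFS=: read" loop, and the fields it keeps (accel_module, accel_opc) are what the "[[ -n software ]]" / "[[ -n xor ]]" assertions at accel.sh@27 check after the run. A minimal sketch of that shape, assuming a plain key:value input stream; the variable names mirror the trace markers, not the verbatim SPDK script:

    #!/usr/bin/env bash
    # Sketch of the readout loop behind the `IFS=:` / `read -r var val` /
    # `case "$var" in` markers in this log; key:value input is assumed.
    parse_accel_config() {
      local var val accel_module='' accel_opc=''
      while IFS=: read -r var val; do
        case "$var" in
          module) accel_module=$val ;;   # e.g. software
          opc)    accel_opc=$val ;;      # e.g. compare, xor, dif_verify
        esac
      done
      # Same shape as the [[ -n ... ]] assertions at accel.sh@27 above.
      [[ -n $accel_module && -n $accel_opc ]] && echo "$accel_module/$accel_opc"
    }
    printf 'module:software\nopc:xor\n' | parse_accel_config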
00:06:27.401 [2024-05-15 01:34:51.109636] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3924375 ] 00:06:27.401 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.401 [2024-05-15 01:34:51.180717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.401 [2024-05-15 01:34:51.271103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.658 01:34:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.590 01:34:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:28.590 01:34:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.590 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.590 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.590 01:34:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:28.590 01:34:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.590 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.590 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.590 01:34:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:28.590 01:34:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.590 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.590 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.590 01:34:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:28.590 01:34:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.590 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.590 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.590 01:34:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:28.590 
01:34:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.590 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.590 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.590 01:34:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:28.590 01:34:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.590 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.590 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.590 01:34:52 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:28.590 01:34:52 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:28.590 01:34:52 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.590 00:06:28.590 real 0m1.417s 00:06:28.590 user 0m1.267s 00:06:28.590 sys 0m0.153s 00:06:28.590 01:34:52 accel.accel_xor -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:28.590 01:34:52 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:28.590 ************************************ 00:06:28.590 END TEST accel_xor 00:06:28.590 ************************************ 00:06:28.848 01:34:52 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:28.848 01:34:52 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:06:28.848 01:34:52 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:28.848 01:34:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:28.848 ************************************ 00:06:28.848 START TEST accel_xor 00:06:28.848 ************************************ 00:06:28.848 01:34:52 accel.accel_xor -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w xor -y -x 3 00:06:28.848 01:34:52 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:28.848 01:34:52 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:28.848 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.848 01:34:52 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:28.848 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.848 01:34:52 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:28.848 01:34:52 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:28.848 01:34:52 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.848 01:34:52 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.848 01:34:52 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.848 01:34:52 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.848 01:34:52 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.848 01:34:52 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:28.848 01:34:52 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:28.848 [2024-05-15 01:34:52.579615] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
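[editor's note] This second accel_xor pass differs from the first only in "-x 3", i.e. XOR across three source buffers instead of two (visible as "val=3" in the readout below). If the job's workspace layout is still in place, the case can be replayed standalone; the flags are copied from the run_test line above, with the "-c /dev/fd/62" config descriptor omitted (an assumption: without it accel_perf falls back to its default software configuration):

    # Replay of the 3-source XOR case outside the harness; the path assumes
    # this job's workspace, flags are copied from the traced command.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w xor -y -x 3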
00:06:28.848 [2024-05-15 01:34:52.579672] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3924530 ] 00:06:28.848 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.848 [2024-05-15 01:34:52.652688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.848 [2024-05-15 01:34:52.743094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.106 01:34:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.038 01:34:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.038 01:34:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.038 01:34:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.038 01:34:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.038 01:34:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.038 01:34:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.038 01:34:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.038 01:34:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.038 01:34:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.038 01:34:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.039 01:34:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.039 01:34:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.039 01:34:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.039 01:34:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.039 01:34:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.039 01:34:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.039 01:34:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.039 
01:34:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.039 01:34:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.039 01:34:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.039 01:34:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.039 01:34:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.039 01:34:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.039 01:34:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.039 01:34:53 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:30.039 01:34:53 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:30.039 01:34:53 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.039 00:06:30.039 real 0m1.394s 00:06:30.039 user 0m1.251s 00:06:30.039 sys 0m0.146s 00:06:30.039 01:34:53 accel.accel_xor -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:30.039 01:34:53 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:30.039 ************************************ 00:06:30.039 END TEST accel_xor 00:06:30.039 ************************************ 00:06:30.295 01:34:53 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:30.295 01:34:53 accel -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:06:30.295 01:34:53 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:30.295 01:34:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.295 ************************************ 00:06:30.295 START TEST accel_dif_verify 00:06:30.295 ************************************ 00:06:30.295 01:34:54 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dif_verify 00:06:30.295 01:34:54 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:30.295 01:34:54 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:30.295 01:34:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.295 01:34:54 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:30.295 01:34:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.295 01:34:54 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:30.295 01:34:54 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:30.295 01:34:54 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.295 01:34:54 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.295 01:34:54 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.295 01:34:54 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.295 01:34:54 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.295 01:34:54 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:30.295 01:34:54 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:30.295 [2024-05-15 01:34:54.025338] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
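[editor's note] The dif_verify readout below carries three sizes: a 4096-byte transfer, 512-byte blocks, and an 8-byte field. That matches the usual T10-DIF layout of one 8-byte protection tuple guarding each 512-byte block; the mapping is inferred from the sizes, not stated by the log. The arithmetic the three values are assumed to satisfy:

    # Eight 512-byte blocks per 4 KiB buffer, each with an 8-byte DIF field.
    buf=4096 blk=512 dif=8
    echo "blocks/buffer: $((buf / blk)), DIF bytes/buffer: $((buf / blk * dif))"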
00:06:30.295 [2024-05-15 01:34:54.025395] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3924695 ] 00:06:30.295 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.295 [2024-05-15 01:34:54.096038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.295 [2024-05-15 01:34:54.186557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.553 
01:34:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.553 01:34:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:31.925 01:34:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:31.925 
01:34:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:31.925 01:34:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:31.925 01:34:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:31.925 01:34:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:31.925 01:34:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:31.925 01:34:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:31.925 01:34:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:31.925 01:34:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:31.925 01:34:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:31.925 01:34:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:31.925 01:34:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:31.925 01:34:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:31.925 01:34:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:31.925 01:34:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:31.925 01:34:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:31.925 01:34:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:31.925 01:34:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:31.925 01:34:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:31.925 01:34:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:31.925 01:34:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:31.925 01:34:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:31.926 01:34:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:31.926 01:34:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:31.926 01:34:55 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:31.926 01:34:55 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:31.926 01:34:55 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.926 00:06:31.926 real 0m1.416s 00:06:31.926 user 0m1.254s 00:06:31.926 sys 0m0.166s 00:06:31.926 01:34:55 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:31.926 01:34:55 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:31.926 ************************************ 00:06:31.926 END TEST accel_dif_verify 00:06:31.926 ************************************ 00:06:31.926 01:34:55 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:31.926 01:34:55 accel -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:06:31.926 01:34:55 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:31.926 01:34:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:31.926 ************************************ 00:06:31.926 START TEST accel_dif_generate 00:06:31.926 ************************************ 00:06:31.926 01:34:55 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dif_generate 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 
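[editor's note] Where dif_verify checked existing protection fields, the dif_generate pass starting here computes and writes them over the same 4096/512/8 geometry. As with the XOR case, it can be replayed standalone with the flags from the traced invocation (again dropping the config descriptor, on the same assumption as above):

    # Standalone replay of the generate workload; path assumes this job's
    # workspace, flags copied from the traced accel_perf invocation.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w dif_generate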
00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:31.926 [2024-05-15 01:34:55.490163] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:06:31.926 [2024-05-15 01:34:55.490274] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3924848 ] 00:06:31.926 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.926 [2024-05-15 01:34:55.559853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.926 [2024-05-15 01:34:55.648725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.926 01:34:55 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.926 01:34:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.298 01:34:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:33.298 01:34:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.298 01:34:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.298 01:34:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.298 01:34:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:33.298 01:34:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.298 01:34:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.298 01:34:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.298 01:34:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:33.298 01:34:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.298 01:34:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.298 01:34:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.298 01:34:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:33.298 01:34:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.298 01:34:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.298 01:34:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.298 01:34:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:33.298 01:34:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.298 01:34:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.298 01:34:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.298 01:34:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:33.298 01:34:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.298 01:34:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.298 01:34:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.298 01:34:56 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.298 01:34:56 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:33.299 01:34:56 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.299 00:06:33.299 real 0m1.405s 00:06:33.299 user 0m1.266s 00:06:33.299 sys 
0m0.144s 00:06:33.299 01:34:56 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:33.299 01:34:56 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:33.299 ************************************ 00:06:33.299 END TEST accel_dif_generate 00:06:33.299 ************************************ 00:06:33.299 01:34:56 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:33.299 01:34:56 accel -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:06:33.299 01:34:56 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:33.299 01:34:56 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.299 ************************************ 00:06:33.299 START TEST accel_dif_generate_copy 00:06:33.299 ************************************ 00:06:33.299 01:34:56 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dif_generate_copy 00:06:33.299 01:34:56 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:33.299 01:34:56 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:33.299 01:34:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.299 01:34:56 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:33.299 01:34:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.299 01:34:56 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:33.299 01:34:56 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:33.299 01:34:56 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.299 01:34:56 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.299 01:34:56 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.299 01:34:56 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.299 01:34:56 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.299 01:34:56 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:33.299 01:34:56 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:33.299 [2024-05-15 01:34:56.945563] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
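[editor's note] Each of these cases is driven through the same run_test wrapper from autotest_common.sh: an argument-count guard (the "'[' N -le 1 ']'" checks in the trace), START/END TEST banners, and a timed body that yields the real/user/sys summary printed at the end of every test. A rough sketch reconstructed from those markers, not the actual helper (which also manages xtrace state via xtrace_disable / set +x):

    # Sketch of the run_test wrapper implied by the autotest_common.sh
    # markers in this log; names and structure are reconstructed.
    run_test() {
      local name=$1; shift
      [ "$#" -le 1 ] && return 1       # mirrors the '[' N -le 1 ']' guard
      echo "START TEST $name"
      time "$@"                        # emits the real/user/sys summary
      local rc=$?
      echo "END TEST $name"
      return $rc
    }
    run_test demo_sleep sleep 0.1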
00:06:33.299 [2024-05-15 01:34:56.945628] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3925120 ] 00:06:33.299 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.299 [2024-05-15 01:34:57.014731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.299 [2024-05-15 01:34:57.105402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.299 01:34:57 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.299 01:34:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.671 01:34:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:34.671 01:34:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.671 01:34:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:06:34.671 01:34:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.671 01:34:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:34.671 01:34:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.671 01:34:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.671 01:34:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.671 01:34:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:34.671 01:34:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.671 01:34:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.671 01:34:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.671 01:34:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:34.671 01:34:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.671 01:34:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.671 01:34:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.671 01:34:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:34.671 01:34:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.671 01:34:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.671 01:34:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.671 01:34:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:34.671 01:34:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.671 01:34:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.671 01:34:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.671 01:34:58 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:34.671 01:34:58 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:34.671 01:34:58 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.671 00:06:34.671 real 0m1.406s 00:06:34.671 user 0m1.260s 00:06:34.671 sys 0m0.148s 00:06:34.671 01:34:58 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:34.671 01:34:58 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:34.671 ************************************ 00:06:34.671 END TEST accel_dif_generate_copy 00:06:34.671 ************************************ 00:06:34.671 01:34:58 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:34.671 01:34:58 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:34.671 01:34:58 accel -- common/autotest_common.sh@1098 -- # '[' 8 -le 1 ']' 00:06:34.671 01:34:58 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:34.671 01:34:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.671 ************************************ 00:06:34.671 START TEST accel_comp 00:06:34.671 ************************************ 00:06:34.671 01:34:58 accel.accel_comp -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:34.671 01:34:58 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:34.671 01:34:58 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:06:34.671 01:34:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.671 01:34:58 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:34.671 01:34:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.671 01:34:58 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:34.671 01:34:58 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:34.671 01:34:58 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.671 01:34:58 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.671 01:34:58 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.671 01:34:58 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.671 01:34:58 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.671 01:34:58 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:34.671 01:34:58 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:34.671 [2024-05-15 01:34:58.403634] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:06:34.671 [2024-05-15 01:34:58.403700] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3925277 ] 00:06:34.671 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.671 [2024-05-15 01:34:58.474726] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.671 [2024-05-15 01:34:58.567542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.929 
01:34:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:34.929 01:34:58 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.929 01:34:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:35.902 01:34:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:35.902 01:34:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.902 01:34:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:35.902 01:34:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:35.902 01:34:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:35.902 01:34:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.902 01:34:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:35.902 01:34:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:35.902 01:34:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:35.902 01:34:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.902 01:34:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:35.902 01:34:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:35.902 01:34:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:35.902 01:34:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.902 01:34:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:35.902 01:34:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:35.902 01:34:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:35.902 01:34:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.902 01:34:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:35.902 01:34:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:35.902 01:34:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:35.902 01:34:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:35.902 01:34:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:35.902 01:34:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:35.902 01:34:59 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:35.902 01:34:59 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:35.902 01:34:59 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.902 00:06:35.902 real 0m1.419s 00:06:35.902 user 0m1.272s 00:06:35.902 sys 0m0.151s 00:06:35.902 01:34:59 accel.accel_comp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:35.902 01:34:59 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:35.902 ************************************ 00:06:35.902 END TEST accel_comp 00:06:35.902 ************************************ 00:06:35.902 01:34:59 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:35.902 01:34:59 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:06:35.902 01:34:59 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:36.159 01:34:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:36.159 ************************************ 00:06:36.159 START TEST accel_decomp 00:06:36.159 ************************************ 00:06:36.159 01:34:59 
accel.accel_decomp -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:36.159 01:34:59 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:36.159 01:34:59 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:36.159 01:34:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.159 01:34:59 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:36.159 01:34:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.159 01:34:59 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:36.159 01:34:59 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:36.159 01:34:59 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.159 01:34:59 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.159 01:34:59 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.159 01:34:59 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.159 01:34:59 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.159 01:34:59 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:36.159 01:34:59 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:36.159 [2024-05-15 01:34:59.882764] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:06:36.159 [2024-05-15 01:34:59.882831] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3925442 ] 00:06:36.159 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.159 [2024-05-15 01:34:59.955103] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.159 [2024-05-15 01:35:00.052263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:36.417 01:35:00 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.417 01:35:00 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.418 01:35:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.418 01:35:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.418 01:35:00 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.418 01:35:00 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:36.418 01:35:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.418 01:35:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.418 01:35:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.418 01:35:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:36.418 01:35:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.418 01:35:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.418 01:35:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.418 01:35:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:36.418 01:35:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.418 01:35:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.418 01:35:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.791 01:35:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:37.791 01:35:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.791 01:35:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.791 01:35:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.791 01:35:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:37.791 01:35:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.791 01:35:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.791 01:35:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.791 01:35:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:37.791 01:35:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.791 01:35:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.791 01:35:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.791 01:35:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:37.791 01:35:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.791 01:35:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.791 01:35:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.791 01:35:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:37.791 01:35:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.791 01:35:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.791 01:35:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.791 01:35:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:37.791 01:35:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.791 01:35:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.791 01:35:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.791 01:35:01 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.791 01:35:01 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:37.791 01:35:01 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.791 00:06:37.791 real 0m1.424s 00:06:37.791 user 0m1.261s 00:06:37.791 sys 0m0.165s 00:06:37.791 01:35:01 accel.accel_decomp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:37.791 01:35:01 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:37.791 ************************************ 00:06:37.791 END TEST accel_decomp 00:06:37.791 ************************************ 00:06:37.791 
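Each of these cases is driven by the run_test helper from autotest_common.sh, which produces the starred START TEST / END TEST banners and the real / user / sys lines by timing the wrapped command, so accel_decomp above completed in about 1.42 s of wall time. A rough sketch of the wrapper's shape, not SPDK's exact implementation:

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"    # e.g. accel_test -t 1 -w decompress -l "$testdir/bib" -y
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }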
01:35:01 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:37.791 01:35:01 accel -- common/autotest_common.sh@1098 -- # '[' 11 -le 1 ']' 00:06:37.791 01:35:01 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:37.791 01:35:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.791 ************************************ 00:06:37.792 START TEST accel_decmop_full 00:06:37.792 ************************************ 00:06:37.792 01:35:01 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:06:37.792 [2024-05-15 01:35:01.357834] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
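The accel_perf command line recorded just above passes -c /dev/fd/62: that path is what bash process substitution expands to, so the JSON accel configuration assembled by build_accel_config is handed to the app over a pipe without touching disk. An illustrative shape of the call, with variable names assumed and flags as in the trace:

    "$SPDK_EXAMPLE_DIR/accel_perf" -c <(printf '%s' "$accel_json_cfg") \
        -t 1 -w decompress -l "$testdir/bib" -y -o 0

The -o 0 variant is what distinguishes accel_decmop_full from plain accel_decomp: with it the trace below reports '111250 bytes' instead of the 4096-byte chunks used so far, i.e. the whole bib test file appears to be decompressed as a single buffer.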
00:06:37.792 [2024-05-15 01:35:01.357904] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3925697 ] 00:06:37.792 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.792 [2024-05-15 01:35:01.429404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.792 [2024-05-15 01:35:01.520039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
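Each case boots a fresh SPDK application, which is why every run repeats the EAL parameter line, the hugepages notice, and the reactor start-up messages. Two details of those EAL lines worth decoding: --file-prefix=spdk_pidNNNN keeps the hugepage state of concurrent SPDK processes from colliding (with --huge-unlink removing the backing files once mapped), and the core mask controls how many reactors come up:

    # core-mask cheat sheet for the EAL lines in this log:
    #   -c 0x1  ->  binary 0001  ->  core 0 only  (single-reactor cases, "Total cores available: 1")
    #   -c 0xf  ->  binary 1111  ->  cores 0-3    (the *_mcore cases, "Total cores available: 4")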
00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.792 01:35:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.164 01:35:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:39.164 01:35:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.164 01:35:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.164 01:35:02 accel.accel_decmop_full -- accel/accel.sh@19 -- 
# read -r var val 00:06:39.164 01:35:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:39.164 01:35:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.164 01:35:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.164 01:35:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.164 01:35:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:39.164 01:35:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.164 01:35:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.164 01:35:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.164 01:35:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:39.164 01:35:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.164 01:35:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.164 01:35:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.164 01:35:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:39.164 01:35:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.164 01:35:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.164 01:35:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.164 01:35:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:39.164 01:35:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.164 01:35:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.164 01:35:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.164 01:35:02 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:39.164 01:35:02 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:39.164 01:35:02 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.164 00:06:39.164 real 0m1.432s 00:06:39.164 user 0m1.284s 00:06:39.164 sys 0m0.151s 00:06:39.164 01:35:02 accel.accel_decmop_full -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:39.164 01:35:02 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:06:39.164 ************************************ 00:06:39.164 END TEST accel_decmop_full 00:06:39.164 ************************************ 00:06:39.164 01:35:02 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:39.164 01:35:02 accel -- common/autotest_common.sh@1098 -- # '[' 11 -le 1 ']' 00:06:39.164 01:35:02 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:39.164 01:35:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.164 ************************************ 00:06:39.164 START TEST accel_decomp_mcore 00:06:39.164 ************************************ 00:06:39.164 01:35:02 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:39.164 01:35:02 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:39.164 01:35:02 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:39.164 01:35:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.165 01:35:02 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:39.165 01:35:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.165 01:35:02 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:39.165 01:35:02 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:39.165 01:35:02 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.165 01:35:02 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.165 01:35:02 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.165 01:35:02 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.165 01:35:02 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.165 01:35:02 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:39.165 01:35:02 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:39.165 [2024-05-15 01:35:02.843333] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:06:39.165 [2024-05-15 01:35:02.843396] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3925867 ] 00:06:39.165 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.165 [2024-05-15 01:35:02.915459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:39.165 [2024-05-15 01:35:03.008089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.165 [2024-05-15 01:35:03.008156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.165 [2024-05-15 01:35:03.008248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:39.165 [2024-05-15 01:35:03.008251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:39.165 01:35:03 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.165 01:35:03 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.165 01:35:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.535 01:35:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:40.535 01:35:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.535 01:35:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.535 01:35:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.535 01:35:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:40.535 01:35:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.535 01:35:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.535 01:35:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.535 01:35:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:40.535 01:35:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.535 01:35:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.535 01:35:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.535 01:35:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:40.535 01:35:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.535 01:35:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.535 01:35:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.535 01:35:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:40.535 01:35:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.535 01:35:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.535 01:35:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.535 01:35:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:40.535 01:35:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.535 01:35:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.535 01:35:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.535 01:35:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:40.535 01:35:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.535 01:35:04 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:40.535 01:35:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.535 01:35:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:40.535 01:35:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.535 01:35:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.535 01:35:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.535 01:35:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:40.535 01:35:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.535 01:35:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.535 01:35:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.535 01:35:04 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:40.535 01:35:04 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:40.535 01:35:04 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.535 00:06:40.535 real 0m1.416s 00:06:40.535 user 0m4.691s 00:06:40.535 sys 0m0.159s 00:06:40.535 01:35:04 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:40.535 01:35:04 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:40.535 ************************************ 00:06:40.535 END TEST accel_decomp_mcore 00:06:40.535 ************************************ 00:06:40.535 01:35:04 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:40.535 01:35:04 accel -- common/autotest_common.sh@1098 -- # '[' 13 -le 1 ']' 00:06:40.535 01:35:04 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:40.535 01:35:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.535 ************************************ 00:06:40.536 START TEST accel_decomp_full_mcore 00:06:40.536 ************************************ 00:06:40.536 01:35:04 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:40.536 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:40.536 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:40.536 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.536 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:40.536 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.536 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:40.536 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:40.536 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.536 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.536 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.536 01:35:04 accel.accel_decomp_full_mcore 
-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.536 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.536 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:40.536 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:40.536 [2024-05-15 01:35:04.315910] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:06:40.536 [2024-05-15 01:35:04.315974] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3926032 ] 00:06:40.536 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.536 [2024-05-15 01:35:04.391725] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:40.794 [2024-05-15 01:35:04.487729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.794 [2024-05-15 01:35:04.487794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.794 [2024-05-15 01:35:04.487890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:40.794 [2024-05-15 01:35:04.487893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:40.794 01:35:04 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case 
"$var" in 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.794 01:35:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.165 01:35:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:42.165 01:35:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.165 01:35:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.165 01:35:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.165 01:35:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:42.165 01:35:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.165 01:35:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.165 01:35:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.165 01:35:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:42.165 01:35:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.165 01:35:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.165 01:35:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.165 01:35:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:42.166 01:35:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.166 01:35:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.166 01:35:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.166 01:35:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:42.166 01:35:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.166 01:35:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.166 01:35:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.166 01:35:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:42.166 01:35:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.166 01:35:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.166 01:35:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.166 01:35:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:42.166 01:35:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.166 01:35:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.166 01:35:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.166 01:35:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:42.166 01:35:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:06:42.166 01:35:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.166 01:35:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.166 01:35:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:42.166 01:35:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.166 01:35:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.166 01:35:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.166 01:35:05 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:42.166 01:35:05 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:42.166 01:35:05 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.166 00:06:42.166 real 0m1.444s 00:06:42.166 user 0m4.771s 00:06:42.166 sys 0m0.162s 00:06:42.166 01:35:05 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:42.166 01:35:05 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:42.166 ************************************ 00:06:42.166 END TEST accel_decomp_full_mcore 00:06:42.166 ************************************ 00:06:42.166 01:35:05 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:42.166 01:35:05 accel -- common/autotest_common.sh@1098 -- # '[' 11 -le 1 ']' 00:06:42.166 01:35:05 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:42.166 01:35:05 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.166 ************************************ 00:06:42.166 START TEST accel_decomp_mthread 00:06:42.166 ************************************ 00:06:42.166 01:35:05 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:42.166 01:35:05 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:42.166 01:35:05 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:42.166 01:35:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.166 01:35:05 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:42.166 01:35:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.166 01:35:05 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:42.166 01:35:05 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:42.166 01:35:05 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.166 01:35:05 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.166 01:35:05 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.166 01:35:05 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.166 01:35:05 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.166 01:35:05 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:42.166 01:35:05 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r 
. 00:06:42.166 [2024-05-15 01:35:05.811200] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:06:42.166 [2024-05-15 01:35:05.811269] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3926191 ] 00:06:42.166 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.166 [2024-05-15 01:35:05.883577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.166 [2024-05-15 01:35:05.973157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.166 01:35:06 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.166 01:35:06 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:06:43.591 01:35:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:43.591 01:35:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.591 01:35:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.591 01:35:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.591 01:35:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:43.591 01:35:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.591 01:35:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.591 01:35:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.591 01:35:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:43.591 01:35:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.591 01:35:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.591 01:35:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.591 01:35:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:43.591 01:35:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.591 01:35:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.591 01:35:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.591 01:35:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:43.591 01:35:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.591 01:35:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.591 01:35:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.591 01:35:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:43.591 01:35:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.591 01:35:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.591 01:35:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.591 01:35:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:43.591 01:35:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.591 01:35:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.591 01:35:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.591 01:35:07 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:43.591 01:35:07 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:43.591 01:35:07 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.591 00:06:43.591 real 0m1.419s 00:06:43.591 user 0m1.260s 00:06:43.591 sys 0m0.163s 00:06:43.591 01:35:07 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:43.591 01:35:07 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:43.591 ************************************ 00:06:43.591 END TEST accel_decomp_mthread 00:06:43.591 ************************************ 00:06:43.591 01:35:07 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:43.591 01:35:07 accel -- common/autotest_common.sh@1098 -- # '[' 13 -le 1 ']' 00:06:43.591 01:35:07 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:43.591 01:35:07 
accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.591 ************************************ 00:06:43.591 START TEST accel_decomp_full_mthread 00:06:43.591 ************************************ 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:43.591 [2024-05-15 01:35:07.283791] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
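
For reference, the accel_perf command being traced for this multi-threaded full-buffer decompress case appears in full just above; reflowed, with per-flag notes inferred from the test name and from the values the parser reads back (not from accel_perf's own documentation):

    # Flags exactly as they appear in the trace; annotations are inferences.
    #   -c /dev/fd/62   accel JSON config piped on fd 62 (empty here: accel_json_cfg=())
    #   -t 1            run time; matches the '1 seconds' value read back in the trace
    #   -w decompress   workload under test
    #   -l .../bib      compressed input file ('111250 bytes' in the trace)
    #   -y              verify the decompressed output
    #   -o 0            the 'full'(-buffer) variant of the test
    #   -T 2            two threads; the 'mthread' part of the test name
    #   (when run by hand without a piped config, omit -c /dev/fd/62)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -c /dev/fd/62 -t 1 -w decompress \
        -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib \
        -y -o 0 -T 2
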
00:06:43.591 [2024-05-15 01:35:07.283852] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3926461 ] 00:06:43.591 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.591 [2024-05-15 01:35:07.355454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.591 [2024-05-15 01:35:07.446004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:43.591 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.592 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.592 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.592 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:43.592 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.592 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.592 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.592 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:43.592 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.592 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.592 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.592 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:43.592 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.592 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.592 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.592 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:06:43.592 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.592 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.592 01:35:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.964 01:35:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:44.964 01:35:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.964 01:35:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.964 01:35:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.964 01:35:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:44.964 01:35:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.964 01:35:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.964 01:35:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.964 01:35:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:44.964 01:35:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.964 01:35:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.964 01:35:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.964 01:35:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:44.964 01:35:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.964 01:35:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.964 01:35:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.964 01:35:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:44.964 01:35:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.964 01:35:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.964 01:35:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.964 01:35:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:44.964 01:35:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.964 01:35:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.964 01:35:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.964 01:35:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:44.964 01:35:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:44.964 01:35:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:44.964 01:35:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:44.964 01:35:08 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:44.964 01:35:08 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:44.964 01:35:08 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.964 00:06:44.964 real 0m1.448s 00:06:44.964 user 0m1.295s 00:06:44.964 sys 0m0.157s 00:06:44.964 01:35:08 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:44.964 01:35:08 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:44.964 ************************************ 00:06:44.964 END TEST accel_decomp_full_mthread 00:06:44.964 
************************************ 00:06:44.964 01:35:08 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:44.964 01:35:08 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:44.964 01:35:08 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:44.964 01:35:08 accel -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:06:44.964 01:35:08 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:44.964 01:35:08 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:44.964 01:35:08 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:44.964 01:35:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:44.964 01:35:08 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.964 01:35:08 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.964 01:35:08 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:44.964 01:35:08 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:44.964 01:35:08 accel -- accel/accel.sh@41 -- # jq -r . 00:06:44.964 ************************************ 00:06:44.964 START TEST accel_dif_functional_tests 00:06:44.964 ************************************ 00:06:44.964 01:35:08 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:44.964 [2024-05-15 01:35:08.802226] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:06:44.964 [2024-05-15 01:35:08.802299] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3926618 ] 00:06:44.964 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.964 [2024-05-15 01:35:08.872711] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:45.222 [2024-05-15 01:35:08.962872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.222 [2024-05-15 01:35:08.962937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.222 [2024-05-15 01:35:08.962939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.222 00:06:45.222 00:06:45.222 CUnit - A unit testing framework for C - Version 2.1-3 00:06:45.222 http://cunit.sourceforge.net/ 00:06:45.222 00:06:45.222 00:06:45.222 Suite: accel_dif 00:06:45.222 Test: verify: DIF generated, GUARD check ...passed 00:06:45.222 Test: verify: DIF generated, APPTAG check ...passed 00:06:45.222 Test: verify: DIF generated, REFTAG check ...passed 00:06:45.222 Test: verify: DIF not generated, GUARD check ...[2024-05-15 01:35:09.047706] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:45.222 [2024-05-15 01:35:09.047785] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:45.222 passed 00:06:45.222 Test: verify: DIF not generated, APPTAG check ...[2024-05-15 01:35:09.047823] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:45.222 [2024-05-15 01:35:09.047849] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:45.222 passed 00:06:45.222 Test: verify: DIF not generated, REFTAG check ...[2024-05-15 01:35:09.047895] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:45.222 [2024-05-15 
01:35:09.047922] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:45.222 passed 00:06:45.222 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:45.222 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-15 01:35:09.047984] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:45.222 passed 00:06:45.222 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:45.222 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:45.222 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:45.222 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-15 01:35:09.048136] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:45.222 passed 00:06:45.222 Test: generate copy: DIF generated, GUARD check ...passed 00:06:45.222 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:45.222 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:45.222 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:45.222 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:45.222 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:45.222 Test: generate copy: iovecs-len validate ...[2024-05-15 01:35:09.048382] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:06:45.222 passed 00:06:45.222 Test: generate copy: buffer alignment validate ...passed 00:06:45.222 00:06:45.222 Run Summary: Type Total Ran Passed Failed Inactive 00:06:45.222 suites 1 1 n/a 0 0 00:06:45.222 tests 20 20 20 0 0 00:06:45.222 asserts 204 204 204 0 n/a 00:06:45.222 00:06:45.222 Elapsed time = 0.003 seconds 00:06:45.480 00:06:45.480 real 0m0.498s 00:06:45.480 user 0m0.765s 00:06:45.480 sys 0m0.178s 00:06:45.480 01:35:09 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:45.480 01:35:09 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:45.480 ************************************ 00:06:45.480 END TEST accel_dif_functional_tests 00:06:45.480 ************************************ 00:06:45.480 00:06:45.480 real 0m32.008s 00:06:45.480 user 0m35.122s 00:06:45.480 sys 0m4.812s 00:06:45.480 01:35:09 accel -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:45.480 01:35:09 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.480 ************************************ 00:06:45.480 END TEST accel 00:06:45.480 ************************************ 00:06:45.480 01:35:09 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:45.480 01:35:09 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:45.480 01:35:09 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:45.480 01:35:09 -- common/autotest_common.sh@10 -- # set +x 00:06:45.480 ************************************ 00:06:45.480 START TEST accel_rpc 00:06:45.480 ************************************ 00:06:45.480 01:35:09 accel_rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:45.480 * Looking for test storage... 
00:06:45.480 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:45.480 01:35:09 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:45.480 01:35:09 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3926807 00:06:45.480 01:35:09 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:45.480 01:35:09 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3926807 00:06:45.480 01:35:09 accel_rpc -- common/autotest_common.sh@828 -- # '[' -z 3926807 ']' 00:06:45.480 01:35:09 accel_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.480 01:35:09 accel_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:45.480 01:35:09 accel_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.480 01:35:09 accel_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:45.480 01:35:09 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.739 [2024-05-15 01:35:09.440268] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:06:45.739 [2024-05-15 01:35:09.440365] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3926807 ] 00:06:45.739 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.739 [2024-05-15 01:35:09.505684] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.739 [2024-05-15 01:35:09.586531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.739 01:35:09 accel_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:45.739 01:35:09 accel_rpc -- common/autotest_common.sh@861 -- # return 0 00:06:45.739 01:35:09 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:45.739 01:35:09 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:45.739 01:35:09 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:45.739 01:35:09 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:45.739 01:35:09 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:45.739 01:35:09 accel_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:45.739 01:35:09 accel_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:45.739 01:35:09 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.739 ************************************ 00:06:45.739 START TEST accel_assign_opcode 00:06:45.739 ************************************ 00:06:45.739 01:35:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # accel_assign_opcode_test_suite 00:06:45.739 01:35:09 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:45.739 01:35:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:45.739 01:35:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:45.739 [2024-05-15 01:35:09.667175] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:45.997 01:35:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 
-- # [[ 0 == 0 ]] 00:06:45.997 01:35:09 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:45.997 01:35:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:45.997 01:35:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:45.997 [2024-05-15 01:35:09.675181] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:45.997 01:35:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:45.997 01:35:09 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:45.997 01:35:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:45.997 01:35:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:45.997 01:35:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:45.997 01:35:09 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:45.997 01:35:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:45.997 01:35:09 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:45.998 01:35:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:45.998 01:35:09 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:45.998 01:35:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:46.255 software 00:06:46.255 00:06:46.255 real 0m0.288s 00:06:46.255 user 0m0.033s 00:06:46.255 sys 0m0.008s 00:06:46.255 01:35:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:46.255 01:35:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:46.255 ************************************ 00:06:46.255 END TEST accel_assign_opcode 00:06:46.255 ************************************ 00:06:46.255 01:35:09 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3926807 00:06:46.255 01:35:09 accel_rpc -- common/autotest_common.sh@947 -- # '[' -z 3926807 ']' 00:06:46.255 01:35:09 accel_rpc -- common/autotest_common.sh@951 -- # kill -0 3926807 00:06:46.255 01:35:09 accel_rpc -- common/autotest_common.sh@952 -- # uname 00:06:46.255 01:35:09 accel_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:46.255 01:35:09 accel_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3926807 00:06:46.255 01:35:10 accel_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:46.255 01:35:10 accel_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:46.255 01:35:10 accel_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3926807' 00:06:46.255 killing process with pid 3926807 00:06:46.255 01:35:10 accel_rpc -- common/autotest_common.sh@966 -- # kill 3926807 00:06:46.255 01:35:10 accel_rpc -- common/autotest_common.sh@971 -- # wait 3926807 00:06:46.513 00:06:46.513 real 0m1.068s 00:06:46.513 user 0m0.979s 00:06:46.513 sys 0m0.435s 00:06:46.513 01:35:10 accel_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:46.513 01:35:10 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.513 ************************************ 00:06:46.513 END TEST accel_rpc 00:06:46.513 ************************************ 00:06:46.513 01:35:10 -- spdk/autotest.sh@181 -- # 
run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:46.513 01:35:10 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:46.513 01:35:10 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:46.513 01:35:10 -- common/autotest_common.sh@10 -- # set +x 00:06:46.771 ************************************ 00:06:46.771 START TEST app_cmdline 00:06:46.771 ************************************ 00:06:46.771 01:35:10 app_cmdline -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:46.771 * Looking for test storage... 00:06:46.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:46.771 01:35:10 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:46.771 01:35:10 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3927016 00:06:46.771 01:35:10 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:46.771 01:35:10 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3927016 00:06:46.771 01:35:10 app_cmdline -- common/autotest_common.sh@828 -- # '[' -z 3927016 ']' 00:06:46.771 01:35:10 app_cmdline -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.771 01:35:10 app_cmdline -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:46.771 01:35:10 app_cmdline -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.771 01:35:10 app_cmdline -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:46.771 01:35:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:46.771 [2024-05-15 01:35:10.554426] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
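
Before the cmdline trace continues: the accel_rpc suite that finished just above reduces to a short RPC conversation with a freshly started target. Condensed from the rpc_cmd calls in its trace (paths are the ones in the log; the test script waits for the RPC socket before issuing calls):

    # Opcode assignment must happen before framework init, as the trace shows.
    ./build/bin/spdk_tgt --wait-for-rpc &                     # start target, hold before init
    ./scripts/rpc.py accel_assign_opc -o copy -m incorrect    # log: 'assigned to module incorrect'
    ./scripts/rpc.py accel_assign_opc -o copy -m software     # re-assignment wins
    ./scripts/rpc.py framework_start_init                     # now run subsystem init
    ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy  # trace expects: software
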
00:06:46.771 [2024-05-15 01:35:10.554534] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3927016 ] 00:06:46.771 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.771 [2024-05-15 01:35:10.623934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.030 [2024-05-15 01:35:10.710401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.030 01:35:10 app_cmdline -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:47.030 01:35:10 app_cmdline -- common/autotest_common.sh@861 -- # return 0 00:06:47.030 01:35:10 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:47.288 { 00:06:47.288 "version": "SPDK v24.05-pre git sha1 4506c0c36", 00:06:47.288 "fields": { 00:06:47.288 "major": 24, 00:06:47.288 "minor": 5, 00:06:47.288 "patch": 0, 00:06:47.288 "suffix": "-pre", 00:06:47.288 "commit": "4506c0c36" 00:06:47.288 } 00:06:47.288 } 00:06:47.288 01:35:11 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:47.288 01:35:11 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:47.288 01:35:11 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:47.288 01:35:11 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:47.288 01:35:11 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:47.288 01:35:11 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:47.288 01:35:11 app_cmdline -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:47.288 01:35:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:47.288 01:35:11 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:47.288 01:35:11 app_cmdline -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:47.546 01:35:11 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:47.546 01:35:11 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:47.546 01:35:11 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:47.546 01:35:11 app_cmdline -- common/autotest_common.sh@649 -- # local es=0 00:06:47.546 01:35:11 app_cmdline -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:47.546 01:35:11 app_cmdline -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:47.546 01:35:11 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:47.546 01:35:11 app_cmdline -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:47.546 01:35:11 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:47.546 01:35:11 app_cmdline -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:47.546 01:35:11 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:47.546 01:35:11 app_cmdline -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:47.546 01:35:11 
app_cmdline -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:47.546 01:35:11 app_cmdline -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:47.546 request: 00:06:47.546 { 00:06:47.546 "method": "env_dpdk_get_mem_stats", 00:06:47.546 "req_id": 1 00:06:47.546 } 00:06:47.546 Got JSON-RPC error response 00:06:47.546 response: 00:06:47.546 { 00:06:47.546 "code": -32601, 00:06:47.546 "message": "Method not found" 00:06:47.546 } 00:06:47.803 01:35:11 app_cmdline -- common/autotest_common.sh@652 -- # es=1 00:06:47.803 01:35:11 app_cmdline -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:47.803 01:35:11 app_cmdline -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:47.803 01:35:11 app_cmdline -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:47.803 01:35:11 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3927016 00:06:47.803 01:35:11 app_cmdline -- common/autotest_common.sh@947 -- # '[' -z 3927016 ']' 00:06:47.803 01:35:11 app_cmdline -- common/autotest_common.sh@951 -- # kill -0 3927016 00:06:47.803 01:35:11 app_cmdline -- common/autotest_common.sh@952 -- # uname 00:06:47.803 01:35:11 app_cmdline -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:47.803 01:35:11 app_cmdline -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3927016 00:06:47.803 01:35:11 app_cmdline -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:47.803 01:35:11 app_cmdline -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:47.803 01:35:11 app_cmdline -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3927016' 00:06:47.803 killing process with pid 3927016 00:06:47.803 01:35:11 app_cmdline -- common/autotest_common.sh@966 -- # kill 3927016 00:06:47.803 01:35:11 app_cmdline -- common/autotest_common.sh@971 -- # wait 3927016 00:06:48.062 00:06:48.062 real 0m1.464s 00:06:48.062 user 0m1.789s 00:06:48.062 sys 0m0.461s 00:06:48.062 01:35:11 app_cmdline -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:48.062 01:35:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:48.062 ************************************ 00:06:48.062 END TEST app_cmdline 00:06:48.062 ************************************ 00:06:48.062 01:35:11 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:48.062 01:35:11 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:48.062 01:35:11 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:48.062 01:35:11 -- common/autotest_common.sh@10 -- # set +x 00:06:48.062 ************************************ 00:06:48.062 START TEST version 00:06:48.062 ************************************ 00:06:48.062 01:35:11 version -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:48.321 * Looking for test storage... 
00:06:48.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:48.321 01:35:12 version -- app/version.sh@17 -- # get_header_version major 00:06:48.321 01:35:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:48.321 01:35:12 version -- app/version.sh@14 -- # cut -f2 00:06:48.321 01:35:12 version -- app/version.sh@14 -- # tr -d '"' 00:06:48.321 01:35:12 version -- app/version.sh@17 -- # major=24 00:06:48.321 01:35:12 version -- app/version.sh@18 -- # get_header_version minor 00:06:48.321 01:35:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:48.321 01:35:12 version -- app/version.sh@14 -- # cut -f2 00:06:48.321 01:35:12 version -- app/version.sh@14 -- # tr -d '"' 00:06:48.321 01:35:12 version -- app/version.sh@18 -- # minor=5 00:06:48.321 01:35:12 version -- app/version.sh@19 -- # get_header_version patch 00:06:48.321 01:35:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:48.321 01:35:12 version -- app/version.sh@14 -- # cut -f2 00:06:48.321 01:35:12 version -- app/version.sh@14 -- # tr -d '"' 00:06:48.321 01:35:12 version -- app/version.sh@19 -- # patch=0 00:06:48.321 01:35:12 version -- app/version.sh@20 -- # get_header_version suffix 00:06:48.321 01:35:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:48.321 01:35:12 version -- app/version.sh@14 -- # cut -f2 00:06:48.321 01:35:12 version -- app/version.sh@14 -- # tr -d '"' 00:06:48.321 01:35:12 version -- app/version.sh@20 -- # suffix=-pre 00:06:48.321 01:35:12 version -- app/version.sh@22 -- # version=24.5 00:06:48.321 01:35:12 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:48.321 01:35:12 version -- app/version.sh@28 -- # version=24.5rc0 00:06:48.321 01:35:12 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:48.321 01:35:12 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:48.321 01:35:12 version -- app/version.sh@30 -- # py_version=24.5rc0 00:06:48.321 01:35:12 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:06:48.321 00:06:48.321 real 0m0.110s 00:06:48.321 user 0m0.055s 00:06:48.321 sys 0m0.078s 00:06:48.321 01:35:12 version -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:48.321 01:35:12 version -- common/autotest_common.sh@10 -- # set +x 00:06:48.321 ************************************ 00:06:48.321 END TEST version 00:06:48.321 ************************************ 00:06:48.321 01:35:12 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:06:48.321 01:35:12 -- spdk/autotest.sh@194 -- # uname -s 00:06:48.321 01:35:12 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:48.321 01:35:12 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:48.321 01:35:12 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:48.321 01:35:12 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 
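
The version test traced above cross-checks the C headers against the installed Python bindings; its helper reduces to this sketch (the grep/cut/tr pipeline is copied from the trace, while the function wrapper and the `${1^^}` upper-casing are assumptions):

    get_header_version() {   # e.g. `get_header_version major` -> 24
        grep -E "^#define SPDK_VERSION_${1^^}[[:space:]]+" include/spdk/version.h \
            | cut -f2 | tr -d '"'
    }
    major=$(get_header_version major)     # 24
    minor=$(get_header_version minor)     # 5
    patch=$(get_header_version patch)     # 0
    suffix=$(get_header_version suffix)   # -pre
    # The same value must come back from the Python package:
    python3 -c 'import spdk; print(spdk.__version__)'   # 24.5rc0
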
00:06:48.321 01:35:12 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:06:48.321 01:35:12 -- spdk/autotest.sh@256 -- # timing_exit lib 00:06:48.321 01:35:12 -- common/autotest_common.sh@727 -- # xtrace_disable 00:06:48.321 01:35:12 -- common/autotest_common.sh@10 -- # set +x 00:06:48.321 01:35:12 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:06:48.321 01:35:12 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:06:48.321 01:35:12 -- spdk/autotest.sh@275 -- # '[' 1 -eq 1 ']' 00:06:48.321 01:35:12 -- spdk/autotest.sh@276 -- # export NET_TYPE 00:06:48.321 01:35:12 -- spdk/autotest.sh@279 -- # '[' tcp = rdma ']' 00:06:48.321 01:35:12 -- spdk/autotest.sh@282 -- # '[' tcp = tcp ']' 00:06:48.321 01:35:12 -- spdk/autotest.sh@283 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:48.321 01:35:12 -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:06:48.321 01:35:12 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:48.321 01:35:12 -- common/autotest_common.sh@10 -- # set +x 00:06:48.321 ************************************ 00:06:48.321 START TEST nvmf_tcp 00:06:48.321 ************************************ 00:06:48.321 01:35:12 nvmf_tcp -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:48.321 * Looking for test storage... 00:06:48.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:48.321 01:35:12 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:48.321 01:35:12 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:48.321 01:35:12 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:48.321 01:35:12 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:48.321 01:35:12 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:48.321 01:35:12 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:48.321 01:35:12 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:48.321 01:35:12 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:48.321 01:35:12 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:48.321 01:35:12 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:48.321 01:35:12 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:48.321 01:35:12 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:48.321 01:35:12 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:48.321 01:35:12 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:48.321 01:35:12 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:48.321 01:35:12 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:48.321 01:35:12 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:48.321 01:35:12 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:48.321 01:35:12 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:48.321 01:35:12 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:48.321 01:35:12 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:48.321 01:35:12 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:48.321 01:35:12 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:48.321 01:35:12 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:48.321 01:35:12 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.321 01:35:12 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.321 01:35:12 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.321 01:35:12 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:48.321 01:35:12 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.321 01:35:12 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:48.321 01:35:12 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:48.321 01:35:12 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:48.321 01:35:12 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:48.321 01:35:12 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:48.321 01:35:12 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:48.321 01:35:12 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:48.321 01:35:12 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:48.321 01:35:12 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:48.321 01:35:12 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:48.321 01:35:12 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:48.321 01:35:12 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:48.321 01:35:12 nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:06:48.321 01:35:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:48.321 01:35:12 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:48.321 01:35:12 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:48.321 01:35:12 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:06:48.321 01:35:12 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:48.321 
01:35:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:48.580 ************************************ 00:06:48.580 START TEST nvmf_example 00:06:48.580 ************************************ 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:48.580 * Looking for test storage... 00:06:48.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@721 -- # xtrace_disable 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:48.580 01:35:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:48.581 01:35:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:48.581 01:35:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:48.581 01:35:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:48.581 01:35:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:48.581 01:35:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:48.581 01:35:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:48.581 01:35:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:48.581 01:35:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:51.112 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:51.112 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:51.112 Found net devices under 
0000:09:00.0: cvl_0_0 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:51.112 Found net devices under 0000:09:00.1: cvl_0_1 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:51.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:51.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:06:51.112 00:06:51.112 --- 10.0.0.2 ping statistics --- 00:06:51.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.112 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:51.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:51.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:06:51.112 00:06:51.112 --- 10.0.0.1 ping statistics --- 00:06:51.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.112 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@721 -- # xtrace_disable 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3929222 00:06:51.112 01:35:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:51.113 01:35:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:51.113 01:35:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3929222 00:06:51.113 01:35:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@828 -- # '[' -z 3929222 ']' 00:06:51.113 01:35:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.113 01:35:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:51.113 01:35:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
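The network bring-up just traced reduces to a short sequence (a minimal sketch reconstructed from the nvmf/common.sh trace above; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are values this particular run discovered, not fixed constants):

  # Start from clean interfaces.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  # Isolate the target-side port in its own namespace so the initiator and
  # the target can exchange real TCP traffic on a single host.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # Address the initiator (host) side and the target (namespace) side.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  # Bring the links up, including loopback inside the namespace.
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Admit NVMe/TCP traffic on the default listener port.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Verify reachability in both directions before starting the target.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The sub-millisecond round-trip times in the ping statistics above confirm the path before nvme-tcp is loaded and the example nvmf target is started under ip netns exec.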
00:06:51.113 01:35:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:51.113 01:35:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:51.113 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.047 01:35:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:52.047 01:35:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@861 -- # return 0 00:06:52.047 01:35:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:52.047 01:35:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@727 -- # xtrace_disable 00:06:52.047 01:35:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:52.047 01:35:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:52.047 01:35:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:52.047 01:35:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:52.047 01:35:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:52.047 01:35:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:52.047 01:35:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:52.047 01:35:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:52.047 01:35:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:52.047 01:35:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:52.047 01:35:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:52.047 01:35:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:52.047 01:35:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:52.047 01:35:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:52.047 01:35:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:52.047 01:35:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:52.047 01:35:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:52.047 01:35:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:52.047 01:35:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:52.047 01:35:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:52.047 01:35:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:52.047 01:35:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:52.047 01:35:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:52.047 01:35:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:52.047 01:35:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:52.305 EAL: No free 2048 kB hugepages reported on node 1 
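With the target listening on /var/tmp/spdk.sock, the rpc_cmd calls traced above assemble the benchmark topology. Outside the harness, the same provisioning can be reproduced with SPDK's RPC client (a sketch assuming scripts/rpc.py from the SPDK tree talking to the default RPC socket; rpc_cmd in the trace is a thin wrapper issuing these same RPCs):

  # TCP transport with the options the harness applied (-u 8192 sets the
  # I/O unit size; -o is carried over from NVMF_TRANSPORT_OPTS above).
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # 64 MiB malloc bdev with 512-byte blocks to back the namespace
  # (MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 from nvmf_example.sh).
  scripts/rpc.py bdev_malloc_create 64 512
  # Subsystem cnode1: allow any host (-a), attach Malloc0 as a namespace,
  # and listen on the namespaced target address at the default port.
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Drive the same workload: queue depth 64, 4 KiB I/Os, 30% reads /
  # 70% writes (-M is the read mix), for 10 seconds.
  build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The results table that follows reports roughly 15.3K IOPS (59.62 MiB/s) at about 4.2 ms average latency for this mix.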
00:07:02.330 Initializing NVMe Controllers 00:07:02.330 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:02.330 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:02.330 Initialization complete. Launching workers. 00:07:02.330 ======================================================== 00:07:02.330 Latency(us) 00:07:02.330 Device Information : IOPS MiB/s Average min max 00:07:02.330 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15262.13 59.62 4192.96 768.96 20208.32 00:07:02.330 ======================================================== 00:07:02.330 Total : 15262.13 59.62 4192.96 768.96 20208.32 00:07:02.330 00:07:02.330 01:35:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:02.330 01:35:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:02.330 01:35:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:02.330 01:35:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:02.330 01:35:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:02.330 01:35:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:02.330 01:35:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:02.330 01:35:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:02.330 rmmod nvme_tcp 00:07:02.588 rmmod nvme_fabrics 00:07:02.588 rmmod nvme_keyring 00:07:02.588 01:35:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:02.588 01:35:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:02.588 01:35:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:02.588 01:35:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 3929222 ']' 00:07:02.588 01:35:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 3929222 00:07:02.588 01:35:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@947 -- # '[' -z 3929222 ']' 00:07:02.588 01:35:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # kill -0 3929222 00:07:02.588 01:35:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # uname 00:07:02.588 01:35:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:07:02.588 01:35:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3929222 00:07:02.588 01:35:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # process_name=nvmf 00:07:02.588 01:35:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@957 -- # '[' nvmf = sudo ']' 00:07:02.588 01:35:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3929222' 00:07:02.588 killing process with pid 3929222 00:07:02.588 01:35:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # kill 3929222 00:07:02.588 01:35:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@971 -- # wait 3929222 00:07:02.847 nvmf threads initialize successfully 00:07:02.847 bdev subsystem init successfully 00:07:02.847 created a nvmf target service 00:07:02.848 create targets's poll groups done 00:07:02.848 all subsystems of target started 00:07:02.848 nvmf target is running 00:07:02.848 all subsystems of target stopped 00:07:02.848 destroy targets's poll groups done 00:07:02.848 destroyed the nvmf target service 00:07:02.848 bdev subsystem finish successfully 00:07:02.848 nvmf threads destroy successfully 00:07:02.848 01:35:26 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:02.848 01:35:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:02.848 01:35:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:02.848 01:35:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:02.848 01:35:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:02.848 01:35:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:02.848 01:35:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:02.848 01:35:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:04.753 01:35:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:04.753 01:35:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:04.753 01:35:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@727 -- # xtrace_disable 00:07:04.753 01:35:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:04.753 00:07:04.753 real 0m16.348s 00:07:04.753 user 0m44.377s 00:07:04.753 sys 0m4.079s 00:07:04.753 01:35:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:04.753 01:35:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:04.753 ************************************ 00:07:04.753 END TEST nvmf_example 00:07:04.753 ************************************ 00:07:04.753 01:35:28 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:04.753 01:35:28 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:07:04.753 01:35:28 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:04.753 01:35:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:04.753 ************************************ 00:07:04.753 START TEST nvmf_filesystem 00:07:04.753 ************************************ 00:07:04.753 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:05.015 * Looking for test storage... 
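Between tests the harness returns the host to a clean state; the nvmf_example teardown traced above distills to a few commands (a sketch: pid 3929222 is this run's target, and the namespace deletion is the assumed effect of _remove_spdk_ns, whose body runs with tracing redirected to /dev/null):

  # Unload the host-side NVMe/TCP stack; removing nvme-tcp also drops the
  # now-unused nvme_fabrics and nvme_keyring modules, as the rmmod lines show.
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # Stop the target and reap it (wait assumes the target is a child of this
  # shell, as it is in the harness).
  kill 3929222 && wait 3929222
  # Drop the test namespace (assumed _remove_spdk_ns equivalent) and flush
  # the initiator-side address.
  ip netns delete cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1

With nvmf_example done in 0m16.348s wall-clock per the timing summary above, the harness proceeds to the nvmf_filesystem test, whose output continues below.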
00:07:05.015 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:05.015 01:35:28 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:05.015 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # 
_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:05.016 #define SPDK_CONFIG_H 00:07:05.016 #define SPDK_CONFIG_APPS 1 00:07:05.016 #define SPDK_CONFIG_ARCH native 00:07:05.016 #undef SPDK_CONFIG_ASAN 00:07:05.016 #undef SPDK_CONFIG_AVAHI 00:07:05.016 #undef SPDK_CONFIG_CET 00:07:05.016 #define SPDK_CONFIG_COVERAGE 1 00:07:05.016 #define SPDK_CONFIG_CROSS_PREFIX 00:07:05.016 #undef SPDK_CONFIG_CRYPTO 00:07:05.016 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:05.016 #undef SPDK_CONFIG_CUSTOMOCF 00:07:05.016 #undef SPDK_CONFIG_DAOS 00:07:05.016 #define SPDK_CONFIG_DAOS_DIR 00:07:05.016 #define SPDK_CONFIG_DEBUG 1 00:07:05.016 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:05.016 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:05.016 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:05.016 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:05.016 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:05.016 #undef SPDK_CONFIG_DPDK_UADK 00:07:05.016 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:05.016 #define SPDK_CONFIG_EXAMPLES 1 00:07:05.016 #undef SPDK_CONFIG_FC 00:07:05.016 #define SPDK_CONFIG_FC_PATH 00:07:05.016 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:05.016 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:05.016 #undef SPDK_CONFIG_FUSE 00:07:05.016 #undef SPDK_CONFIG_FUZZER 00:07:05.016 #define SPDK_CONFIG_FUZZER_LIB 00:07:05.016 #undef SPDK_CONFIG_GOLANG 00:07:05.016 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:05.016 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:05.016 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:05.016 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:07:05.016 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:05.016 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:05.016 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:05.016 #define SPDK_CONFIG_IDXD 1 00:07:05.016 #undef SPDK_CONFIG_IDXD_KERNEL 00:07:05.016 #undef SPDK_CONFIG_IPSEC_MB 00:07:05.016 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:05.016 #define SPDK_CONFIG_ISAL 1 00:07:05.016 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:05.016 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:05.016 #define 
SPDK_CONFIG_LIBDIR 00:07:05.016 #undef SPDK_CONFIG_LTO 00:07:05.016 #define SPDK_CONFIG_MAX_LCORES 00:07:05.016 #define SPDK_CONFIG_NVME_CUSE 1 00:07:05.016 #undef SPDK_CONFIG_OCF 00:07:05.016 #define SPDK_CONFIG_OCF_PATH 00:07:05.016 #define SPDK_CONFIG_OPENSSL_PATH 00:07:05.016 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:05.016 #define SPDK_CONFIG_PGO_DIR 00:07:05.016 #undef SPDK_CONFIG_PGO_USE 00:07:05.016 #define SPDK_CONFIG_PREFIX /usr/local 00:07:05.016 #undef SPDK_CONFIG_RAID5F 00:07:05.016 #undef SPDK_CONFIG_RBD 00:07:05.016 #define SPDK_CONFIG_RDMA 1 00:07:05.016 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:05.016 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:05.016 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:05.016 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:05.016 #define SPDK_CONFIG_SHARED 1 00:07:05.016 #undef SPDK_CONFIG_SMA 00:07:05.016 #define SPDK_CONFIG_TESTS 1 00:07:05.016 #undef SPDK_CONFIG_TSAN 00:07:05.016 #define SPDK_CONFIG_UBLK 1 00:07:05.016 #define SPDK_CONFIG_UBSAN 1 00:07:05.016 #undef SPDK_CONFIG_UNIT_TESTS 00:07:05.016 #undef SPDK_CONFIG_URING 00:07:05.016 #define SPDK_CONFIG_URING_PATH 00:07:05.016 #undef SPDK_CONFIG_URING_ZNS 00:07:05.016 #undef SPDK_CONFIG_USDT 00:07:05.016 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:05.016 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:05.016 #define SPDK_CONFIG_VFIO_USER 1 00:07:05.016 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:05.016 #define SPDK_CONFIG_VHOST 1 00:07:05.016 #define SPDK_CONFIG_VIRTIO 1 00:07:05.016 #undef SPDK_CONFIG_VTUNE 00:07:05.016 #define SPDK_CONFIG_VTUNE_DIR 00:07:05.016 #define SPDK_CONFIG_WERROR 1 00:07:05.016 #define SPDK_CONFIG_WPDK_DIR 00:07:05.016 #undef SPDK_CONFIG_XNVME 00:07:05.016 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.016 01:35:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 
00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:05.017 
01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export 
SPDK_TEST_LVOL 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : v22.11.4 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:05.017 
01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:05.017 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONDONTWRITEBYTECODE=1 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 
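Note on the wall of exports above: every "-- # : <value>" followed by "-- # export SPDK_TEST_*" pair is bash's default-then-export idiom seen through xtrace. A minimal sketch of the pattern, using flag names and values from this run (the real autotest_common.sh sets dozens of these):

    # ": ${VAR:=default}" assigns only when VAR is unset or empty,
    # then export publishes the decision to every child test script.
    : "${SPDK_TEST_NVMF:=1}"
    export SPDK_TEST_NVMF
    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"
    export SPDK_TEST_NVMF_TRANSPORT
    : "${SPDK_TEST_NVMF_NICS:=e810}"
    export SPDK_TEST_NVMF_NICS

Under set -x the expansion has already happened, which is why the trace shows only the bare ": 1" or ": tcp" before each export.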
00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 3931044 ]] 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 3931044 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1677 -- # set_test_storage 2147483648 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.IUewOe 00:07:05.018 
01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.IUewOe/tests/target /tmp/spdk.IUewOe 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:05.018 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=978526208 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4305903616 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=49157099520 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994729472 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12837629952 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30992654336 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997364736 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 
-- # uses["$mount"]=4710400 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12389961728 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398948352 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8986624 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30996893696 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997364736 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=471040 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199468032 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199472128 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:05.019 * Looking for test storage... 
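Note on the df walk just above: set_test_storage indexes every mount by filesystem type, size, and headroom, then picks the first candidate directory whose mount can hold the requested space. Condensed to a sketch (array and variable names from the trace; the requested 2214592512 bytes is exactly 2 GiB plus a 64 MiB cushion):

    # Index `df -T` output: one entry per mount point, sizes in bytes.
    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        sizes["$mount"]=$((size * 1024))      # df -T reports 1K blocks
        avails["$mount"]=$((avail * 1024))
        uses["$mount"]=$((use * 1024))
    done < <(df -T | grep -v Filesystem)

    requested_size=$((2 * 1024**3 + 64 * 1024**2))   # 2214592512
    target_space=${avails[/]}                        # "/" backs the spdk_root overlay here
    (( target_space >= requested_size )) || echo "not enough room on /" >&2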
00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=49157099520 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=15052222464 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # set -o errtrace 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # shopt -s extdebug 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1681 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # true 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # xtrace_fd 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.019 01:35:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.020 01:35:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.020 01:35:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.020 01:35:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:05.020 01:35:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.020 01:35:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:05.020 01:35:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:05.020 01:35:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:05.020 01:35:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:05.020 01:35:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:05.020 01:35:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.020 01:35:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:05.020 01:35:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:05.020 01:35:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:05.020 01:35:28 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:05.020 01:35:28 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:05.020 01:35:28 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:05.020 01:35:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:05.020 01:35:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:05.020 01:35:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:05.020 01:35:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 
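The prefix on every line of this log is not Jenkins magic: the PS4 assignment traced a few entries back (autotest_common.sh@1683) tells bash what to print before each xtrace'd command. A self-contained reproduction, with the domain string borrowed from this run:

    #!/usr/bin/env bash
    # bash expands PS4 like a prompt, so \t becomes the wall-clock time,
    # ${BASH_SOURCE...}@${LINENO} becomes the "file@NN" marker, and \$
    # renders as '#' because the suite runs as root -- hence the "-- #"
    # on every traced line above.
    test_domain=nvmf_tcp.nvmf_filesystem
    PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
    set -x
    echo hello   # traced as: 01:35:28 nvmf_tcp.nvmf_filesystem -- <file>@9 -- # echo hello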
00:07:05.020 01:35:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:05.020 01:35:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:05.020 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:05.020 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.020 01:35:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:05.020 01:35:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:05.020 01:35:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:05.020 01:35:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 
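gather_supported_nvmf_pci_devs, running above, is essentially a lookup table keyed by PCI vendor:device ID. A condensed sketch (IDs copied from the trace; pci_bus_cache is assumed to have been filled earlier in nvmf/common.sh from the PCI bus scan, mapping "vendor:device" to whitespace-separated PCI addresses):

    intel=0x8086 mellanox=0x15b3
    declare -a e810 x722 mlx pci_devs
    e810+=(${pci_bus_cache["$intel:0x1592"]})     # E810 QSFP variant
    e810+=(${pci_bus_cache["$intel:0x159b"]})     # E810 SFP variant -- the two ports found below
    x722+=(${pci_bus_cache["$intel:0x37d2"]})
    mlx+=(${pci_bus_cache["$mellanox:0x101d"]})   # one of several ConnectX IDs in the real list
    pci_devs+=("${e810[@]}")                       # SPDK_TEST_NVMF_NICS=e810 narrows to this family

The deliberately unquoted expansions let one cache entry contribute several PCI addresses at once.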
00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:07.550 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:07.550 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:07.550 Found net devices under 0000:09:00.0: cvl_0_0 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 
-- # [[ tcp == tcp ]] 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:07.550 Found net devices under 0000:09:00.1: cvl_0_1 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:07.550 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:07.551 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:07.551 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:07.551 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:07.551 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:07.551 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:07.551 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:07.551 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:07.551 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:07.551 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:07.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:07.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:07:07.551 00:07:07.551 --- 10.0.0.2 ping statistics --- 00:07:07.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:07.551 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:07:07.551 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:07.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:07.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:07:07.551 00:07:07.551 --- 10.0.0.1 ping statistics --- 00:07:07.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:07.551 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:07:07.551 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:07.551 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:07.551 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:07.551 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:07.551 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:07.551 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:07.551 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:07.551 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:07.551 01:35:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:07.551 01:35:31 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:07.551 01:35:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:07:07.551 01:35:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:07.551 01:35:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:07.808 ************************************ 00:07:07.808 START TEST nvmf_filesystem_no_in_capsule 00:07:07.808 ************************************ 00:07:07.808 01:35:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # nvmf_filesystem_part 0 00:07:07.808 01:35:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:07.808 01:35:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:07.808 01:35:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:07.808 01:35:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@721 -- # xtrace_disable 00:07:07.808 01:35:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:07.808 01:35:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3932961 00:07:07.808 01:35:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:07.808 01:35:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3932961 00:07:07.808 01:35:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@828 -- # '[' -z 
3932961 ']' 00:07:07.808 01:35:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.808 01:35:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local max_retries=100 00:07:07.808 01:35:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.808 01:35:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # xtrace_disable 00:07:07.808 01:35:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:07.808 [2024-05-15 01:35:31.546598] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:07:07.808 [2024-05-15 01:35:31.546690] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:07.808 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.808 [2024-05-15 01:35:31.626211] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:07.808 [2024-05-15 01:35:31.719279] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:07.808 [2024-05-15 01:35:31.719345] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:07.808 [2024-05-15 01:35:31.719368] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:07.808 [2024-05-15 01:35:31.719382] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:07.808 [2024-05-15 01:35:31.719394] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
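Stepping back: the TCP test rig the target is now listening inside was assembled by nvmf_tcp_init above. Replayed as a plain script (interface names and addresses are the ones from this run):

    # One E810 port stays in the root namespace as the initiator; the
    # other moves into a private network namespace and plays the target,
    # so traffic crosses real NIC queues instead of loopback.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # sanity check: 0.254 ms round trip above
    # ...and the target itself runs inside the namespace:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF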
00:07:07.808 [2024-05-15 01:35:31.719453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.808 [2024-05-15 01:35:31.719505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.808 [2024-05-15 01:35:31.719621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:07.808 [2024-05-15 01:35:31.719623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.066 01:35:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:07:08.066 01:35:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@861 -- # return 0 00:07:08.066 01:35:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:08.066 01:35:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@727 -- # xtrace_disable 00:07:08.066 01:35:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:08.066 01:35:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:08.066 01:35:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:08.066 01:35:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:08.066 01:35:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:08.066 01:35:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:08.066 [2024-05-15 01:35:31.873048] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:08.066 01:35:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:08.066 01:35:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:08.066 01:35:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:08.066 01:35:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:08.325 Malloc1 00:07:08.325 01:35:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:08.325 01:35:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:08.325 01:35:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:08.325 01:35:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:08.325 01:35:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:08.325 01:35:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:08.325 01:35:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:08.325 01:35:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:08.325 01:35:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:08.325 01:35:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:08.325 01:35:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:08.325 01:35:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:08.325 [2024-05-15 01:35:32.049588] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:08.325 [2024-05-15 01:35:32.049878] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:08.325 01:35:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:08.325 01:35:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:08.325 01:35:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_name=Malloc1 00:07:08.325 01:35:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bdev_info 00:07:08.325 01:35:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local bs 00:07:08.325 01:35:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local nb 00:07:08.325 01:35:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:08.325 01:35:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:08.325 01:35:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:08.325 01:35:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:08.325 01:35:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bdev_info='[ 00:07:08.325 { 00:07:08.325 "name": "Malloc1", 00:07:08.325 "aliases": [ 00:07:08.325 "d74ba7a2-c7e5-49fa-b46b-9ebf42b81f5f" 00:07:08.325 ], 00:07:08.325 "product_name": "Malloc disk", 00:07:08.325 "block_size": 512, 00:07:08.325 "num_blocks": 1048576, 00:07:08.325 "uuid": "d74ba7a2-c7e5-49fa-b46b-9ebf42b81f5f", 00:07:08.325 "assigned_rate_limits": { 00:07:08.325 "rw_ios_per_sec": 0, 00:07:08.325 "rw_mbytes_per_sec": 0, 00:07:08.325 "r_mbytes_per_sec": 0, 00:07:08.325 "w_mbytes_per_sec": 0 00:07:08.325 }, 00:07:08.325 "claimed": true, 00:07:08.325 "claim_type": "exclusive_write", 00:07:08.325 "zoned": false, 00:07:08.325 "supported_io_types": { 00:07:08.325 "read": true, 00:07:08.325 "write": true, 00:07:08.325 "unmap": true, 00:07:08.325 "write_zeroes": true, 00:07:08.325 "flush": true, 00:07:08.325 "reset": true, 00:07:08.325 "compare": false, 00:07:08.325 "compare_and_write": false, 00:07:08.325 "abort": true, 00:07:08.325 "nvme_admin": false, 00:07:08.325 "nvme_io": false 00:07:08.325 }, 00:07:08.325 "memory_domains": [ 00:07:08.325 { 00:07:08.325 "dma_device_id": "system", 00:07:08.325 "dma_device_type": 1 
00:07:08.325 }, 00:07:08.325 { 00:07:08.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:08.325 "dma_device_type": 2 00:07:08.325 } 00:07:08.325 ], 00:07:08.325 "driver_specific": {} 00:07:08.325 } 00:07:08.325 ]' 00:07:08.325 01:35:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .block_size' 00:07:08.325 01:35:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # bs=512 00:07:08.325 01:35:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # jq '.[] .num_blocks' 00:07:08.325 01:35:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # nb=1048576 00:07:08.325 01:35:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_size=512 00:07:08.325 01:35:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # echo 512 00:07:08.325 01:35:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:08.325 01:35:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:08.890 01:35:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:08.890 01:35:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local i=0 00:07:08.890 01:35:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:07:08.890 01:35:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:07:08.890 01:35:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # sleep 2 00:07:10.786 01:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:07:10.786 01:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:07:10.786 01:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:07:10.786 01:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:07:10.786 01:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:07:10.786 01:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # return 0 00:07:10.787 01:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:11.044 01:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:11.044 01:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:11.044 01:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:11.044 01:35:34 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:11.044 01:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:11.044 01:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:11.044 01:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:11.044 01:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:11.044 01:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:11.044 01:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:11.044 01:35:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:11.608 01:35:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:12.979 01:35:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:12.979 01:35:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:12.979 01:35:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:07:12.979 01:35:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:12.979 01:35:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:12.979 ************************************ 00:07:12.979 START TEST filesystem_ext4 00:07:12.979 ************************************ 00:07:12.979 01:35:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:12.979 01:35:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:12.979 01:35:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:12.979 01:35:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:12.979 01:35:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local fstype=ext4 00:07:12.979 01:35:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:07:12.979 01:35:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local i=0 00:07:12.979 01:35:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local force 00:07:12.979 01:35:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # '[' ext4 = ext4 ']' 00:07:12.979 01:35:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # force=-F 00:07:12.979 01:35:36 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:12.979 mke2fs 1.46.5 (30-Dec-2021) 00:07:12.980 Discarding device blocks: 0/522240 done 00:07:12.980 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:12.980 Filesystem UUID: 1068e7af-0ece-41e7-94f7-c88ca9159458 00:07:12.980 Superblock backups stored on blocks: 00:07:12.980 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:12.980 00:07:12.980 Allocating group tables: 0/64 done 00:07:12.980 Writing inode tables: 0/64 done 00:07:12.980 Creating journal (8192 blocks): done 00:07:12.980 Writing superblocks and filesystem accounting information: 0/64 done 00:07:12.980 00:07:12.980 01:35:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@942 -- # return 0 00:07:12.980 01:35:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:13.237 01:35:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:13.237 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:13.237 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:13.237 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:13.237 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:13.237 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:13.237 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3932961 00:07:13.237 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:13.237 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:13.237 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:13.237 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:13.237 00:07:13.237 real 0m0.508s 00:07:13.237 user 0m0.015s 00:07:13.237 sys 0m0.029s 00:07:13.237 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:13.237 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:13.237 ************************************ 00:07:13.237 END TEST filesystem_ext4 00:07:13.237 ************************************ 00:07:13.237 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:13.237 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:07:13.237 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:13.237 01:35:37 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:13.237 ************************************ 00:07:13.237 START TEST filesystem_btrfs 00:07:13.237 ************************************ 00:07:13.237 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:13.237 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:13.237 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:13.237 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:13.237 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local fstype=btrfs 00:07:13.237 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:07:13.237 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local i=0 00:07:13.237 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local force 00:07:13.237 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # '[' btrfs = ext4 ']' 00:07:13.237 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # force=-f 00:07:13.237 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:13.495 btrfs-progs v6.6.2 00:07:13.495 See https://btrfs.readthedocs.io for more information. 00:07:13.495 00:07:13.495 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
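The make_filesystem helper driving this mkfs.btrfs call, and the mkfs.ext4 and mkfs.xfs calls around it, reduces to picking the right force flag per filesystem type. A minimal sketch reconstructed from the common/autotest_common.sh@923-942 markers in the trace; the retry bound is an assumption, since the log only ever shows i=0:

    # Hypothetical reconstruction of make_filesystem() from the traced line numbers.
    make_filesystem() {
        local fstype=$1 dev_name=$2
        local i=0 force
        if [ "$fstype" = ext4 ]; then
            force=-F              # ext4 forces with -F (autotest_common.sh@929)
        else
            force=-f              # btrfs and xfs force with -f (autotest_common.sh@931)
        fi
        # Retry bound of 3 is assumed; only the first attempt appears in this log.
        while ((i < 3)); do
            mkfs."$fstype" $force "$dev_name" && return 0
            ((i++))
        done
        return 1
    }
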
00:07:13.495 NOTE: several default settings have changed in version 5.15, please make sure 00:07:13.495 this does not affect your deployments: 00:07:13.495 - DUP for metadata (-m dup) 00:07:13.495 - enabled no-holes (-O no-holes) 00:07:13.495 - enabled free-space-tree (-R free-space-tree) 00:07:13.495 00:07:13.495 Label: (null) 00:07:13.495 UUID: a14ad9da-4831-4d59-9a4a-3ba959f4fa12 00:07:13.495 Node size: 16384 00:07:13.495 Sector size: 4096 00:07:13.495 Filesystem size: 510.00MiB 00:07:13.495 Block group profiles: 00:07:13.495 Data: single 8.00MiB 00:07:13.495 Metadata: DUP 32.00MiB 00:07:13.495 System: DUP 8.00MiB 00:07:13.495 SSD detected: yes 00:07:13.495 Zoned device: no 00:07:13.495 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:13.495 Runtime features: free-space-tree 00:07:13.495 Checksum: crc32c 00:07:13.495 Number of devices: 1 00:07:13.495 Devices: 00:07:13.495 ID SIZE PATH 00:07:13.495 1 510.00MiB /dev/nvme0n1p1 00:07:13.495 00:07:13.495 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@942 -- # return 0 00:07:13.495 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:13.752 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:13.753 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:13.753 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:13.753 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:13.753 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:13.753 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:14.010 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3932961 00:07:14.010 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:14.010 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:14.010 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:14.010 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:14.010 00:07:14.010 real 0m0.591s 00:07:14.010 user 0m0.010s 00:07:14.010 sys 0m0.051s 00:07:14.010 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:14.010 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:14.010 ************************************ 00:07:14.010 END TEST filesystem_btrfs 00:07:14.010 ************************************ 00:07:14.010 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:14.010 01:35:37 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:07:14.010 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:14.010 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:14.010 ************************************ 00:07:14.010 START TEST filesystem_xfs 00:07:14.010 ************************************ 00:07:14.010 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create xfs nvme0n1 00:07:14.010 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:14.010 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:14.010 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:14.010 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local fstype=xfs 00:07:14.010 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:07:14.010 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local i=0 00:07:14.010 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local force 00:07:14.010 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # '[' xfs = ext4 ']' 00:07:14.010 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # force=-f 00:07:14.010 01:35:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:14.010 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:14.010 = sectsz=512 attr=2, projid32bit=1 00:07:14.010 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:14.010 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:14.010 data = bsize=4096 blocks=130560, imaxpct=25 00:07:14.010 = sunit=0 swidth=0 blks 00:07:14.010 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:14.010 log =internal log bsize=4096 blocks=16384, version=2 00:07:14.010 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:14.010 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:14.943 Discarding blocks...Done. 
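With the xfs filesystem written, the trace below runs the same mount smoke test already used for ext4 and btrfs: mount the new partition, create and sync a file, remove it, and unmount. Condensed into standalone commands, with the device and mountpoint taken from the log:

    # The traced smoke test (target/filesystem.sh@23-30), runnable by hand.
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync                   # push the write across the NVMe/TCP connection
    rm /mnt/device/aaa
    sync
    umount /mnt/device
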
00:07:14.943 01:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@942 -- # return 0 00:07:14.943 01:35:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:16.909 01:35:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:16.909 01:35:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:16.909 01:35:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:16.909 01:35:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:16.909 01:35:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:16.909 01:35:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:16.909 01:35:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3932961 00:07:16.909 01:35:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:16.909 01:35:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:16.909 01:35:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:16.909 01:35:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:16.909 00:07:16.909 real 0m2.970s 00:07:16.909 user 0m0.019s 00:07:16.909 sys 0m0.034s 00:07:16.909 01:35:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:16.909 01:35:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:16.909 ************************************ 00:07:16.909 END TEST filesystem_xfs 00:07:16.909 ************************************ 00:07:16.909 01:35:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:16.909 01:35:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:16.909 01:35:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:16.909 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:16.909 01:35:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:16.909 01:35:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # local i=0 00:07:16.909 01:35:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:07:16.909 01:35:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:17.167 01:35:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:07:17.167 
01:35:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:17.167 01:35:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1228 -- # return 0 00:07:17.167 01:35:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:17.167 01:35:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:17.167 01:35:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:17.167 01:35:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:17.167 01:35:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:17.167 01:35:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3932961 00:07:17.167 01:35:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@947 -- # '[' -z 3932961 ']' 00:07:17.167 01:35:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # kill -0 3932961 00:07:17.167 01:35:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # uname 00:07:17.167 01:35:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:07:17.167 01:35:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3932961 00:07:17.167 01:35:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:07:17.167 01:35:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:07:17.167 01:35:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3932961' 00:07:17.167 killing process with pid 3932961 00:07:17.167 01:35:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # kill 3932961 00:07:17.167 [2024-05-15 01:35:40.890246] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:17.167 01:35:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # wait 3932961 00:07:17.425 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:17.425 00:07:17.425 real 0m9.825s 00:07:17.425 user 0m37.397s 00:07:17.425 sys 0m1.597s 00:07:17.425 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:17.425 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:17.425 ************************************ 00:07:17.425 END TEST nvmf_filesystem_no_in_capsule 00:07:17.425 ************************************ 00:07:17.425 01:35:41 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:17.425 01:35:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1098 -- # 
'[' 3 -le 1 ']' 00:07:17.425 01:35:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:17.425 01:35:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:17.684 ************************************ 00:07:17.684 START TEST nvmf_filesystem_in_capsule 00:07:17.684 ************************************ 00:07:17.684 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # nvmf_filesystem_part 4096 00:07:17.684 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:17.684 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:17.684 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:17.684 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@721 -- # xtrace_disable 00:07:17.684 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:17.684 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3934378 00:07:17.684 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:17.684 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3934378 00:07:17.684 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@828 -- # '[' -z 3934378 ']' 00:07:17.684 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.684 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local max_retries=100 00:07:17.684 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.684 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # xtrace_disable 00:07:17.684 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:17.684 [2024-05-15 01:35:41.426944] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:07:17.684 [2024-05-15 01:35:41.427028] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:17.684 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.684 [2024-05-15 01:35:41.499063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:17.684 [2024-05-15 01:35:41.586344] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:17.684 [2024-05-15 01:35:41.586409] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
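The in-capsule variant starting here differs from the first suite mainly in how the target is started and configured: nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and waitforlisten polls its RPC socket before any rpc_cmd runs. A minimal equivalent, with the binary path and flags from the log; the polling loop is an illustrative assumption:

    # Start the target and wait for its RPC socket to answer.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # max_retries=100 matches the waitforlisten default seen in the trace.
    for ((i = 0; i < 100; i++)); do
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done
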
00:07:17.684 [2024-05-15 01:35:41.586433] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:17.684 [2024-05-15 01:35:41.586447] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:17.684 [2024-05-15 01:35:41.586459] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:17.684 [2024-05-15 01:35:41.586544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.684 [2024-05-15 01:35:41.586601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:17.684 [2024-05-15 01:35:41.586712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:17.684 [2024-05-15 01:35:41.586715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.943 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:07:17.943 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@861 -- # return 0 00:07:17.943 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:17.943 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@727 -- # xtrace_disable 00:07:17.943 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:17.943 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:17.943 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:17.943 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:17.943 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:17.943 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:17.943 [2024-05-15 01:35:41.738909] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:17.943 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:17.943 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:17.943 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:17.943 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:18.201 Malloc1 00:07:18.201 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:18.201 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:18.201 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:18.201 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:18.201 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:18.201 01:35:41 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:18.201 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:18.201 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:18.201 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:18.201 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:18.201 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:18.201 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:18.201 [2024-05-15 01:35:41.924294] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:18.201 [2024-05-15 01:35:41.924641] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:18.201 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:18.201 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:18.201 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_name=Malloc1 00:07:18.201 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bdev_info 00:07:18.201 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local bs 00:07:18.201 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local nb 00:07:18.201 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:18.201 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:18.201 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:18.201 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:18.201 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bdev_info='[ 00:07:18.201 { 00:07:18.201 "name": "Malloc1", 00:07:18.201 "aliases": [ 00:07:18.201 "b15e3fe7-efb6-427d-b179-d978286159a5" 00:07:18.201 ], 00:07:18.201 "product_name": "Malloc disk", 00:07:18.201 "block_size": 512, 00:07:18.201 "num_blocks": 1048576, 00:07:18.201 "uuid": "b15e3fe7-efb6-427d-b179-d978286159a5", 00:07:18.201 "assigned_rate_limits": { 00:07:18.201 "rw_ios_per_sec": 0, 00:07:18.201 "rw_mbytes_per_sec": 0, 00:07:18.201 "r_mbytes_per_sec": 0, 00:07:18.201 "w_mbytes_per_sec": 0 00:07:18.201 }, 00:07:18.201 "claimed": true, 00:07:18.201 "claim_type": "exclusive_write", 00:07:18.201 "zoned": false, 00:07:18.201 "supported_io_types": { 00:07:18.201 "read": true, 00:07:18.201 "write": true, 00:07:18.201 "unmap": true, 00:07:18.201 "write_zeroes": true, 00:07:18.201 "flush": true, 00:07:18.201 "reset": true, 
00:07:18.201 "compare": false, 00:07:18.201 "compare_and_write": false, 00:07:18.201 "abort": true, 00:07:18.201 "nvme_admin": false, 00:07:18.201 "nvme_io": false 00:07:18.201 }, 00:07:18.201 "memory_domains": [ 00:07:18.201 { 00:07:18.201 "dma_device_id": "system", 00:07:18.201 "dma_device_type": 1 00:07:18.201 }, 00:07:18.201 { 00:07:18.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.201 "dma_device_type": 2 00:07:18.201 } 00:07:18.201 ], 00:07:18.201 "driver_specific": {} 00:07:18.201 } 00:07:18.201 ]' 00:07:18.201 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .block_size' 00:07:18.201 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # bs=512 00:07:18.201 01:35:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # jq '.[] .num_blocks' 00:07:18.201 01:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # nb=1048576 00:07:18.201 01:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_size=512 00:07:18.201 01:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # echo 512 00:07:18.201 01:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:18.201 01:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:18.767 01:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:18.767 01:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local i=0 00:07:18.767 01:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:07:18.767 01:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:07:18.767 01:35:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # sleep 2 00:07:20.666 01:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:07:20.666 01:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:07:20.667 01:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:07:20.667 01:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:07:20.667 01:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:07:20.667 01:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # return 0 00:07:20.667 01:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:20.667 01:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:20.924 01:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:20.924 01:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:20.924 01:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:20.924 01:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:20.924 01:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:20.924 01:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:20.924 01:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:20.924 01:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:20.924 01:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:21.182 01:35:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:21.747 01:35:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:22.680 01:35:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:22.680 01:35:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:22.680 01:35:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:07:22.680 01:35:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:22.680 01:35:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:22.680 ************************************ 00:07:22.680 START TEST filesystem_in_capsule_ext4 00:07:22.680 ************************************ 00:07:22.680 01:35:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:22.680 01:35:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:22.680 01:35:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:22.680 01:35:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:22.680 01:35:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local fstype=ext4 00:07:22.680 01:35:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:07:22.680 01:35:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local i=0 00:07:22.680 01:35:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local force 00:07:22.680 01:35:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
common/autotest_common.sh@928 -- # '[' ext4 = ext4 ']' 00:07:22.680 01:35:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # force=-F 00:07:22.680 01:35:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:22.680 mke2fs 1.46.5 (30-Dec-2021) 00:07:22.938 Discarding device blocks: 0/522240 done 00:07:22.938 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:22.938 Filesystem UUID: 4f596202-444b-4326-b4d4-6dc2cb0982ae 00:07:22.938 Superblock backups stored on blocks: 00:07:22.938 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:22.938 00:07:22.938 Allocating group tables: 0/64 done 00:07:22.938 Writing inode tables: 0/64 done 00:07:22.938 Creating journal (8192 blocks): done 00:07:23.453 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:07:23.453 00:07:23.453 01:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@942 -- # return 0 00:07:23.453 01:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:23.711 01:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:23.711 01:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:23.711 01:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:23.711 01:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:23.711 01:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:23.711 01:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:23.711 01:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3934378 00:07:23.711 01:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:23.711 01:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:23.711 01:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:23.711 01:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:23.711 00:07:23.711 real 0m1.009s 00:07:23.711 user 0m0.013s 00:07:23.711 sys 0m0.038s 00:07:23.711 01:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:23.711 01:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:23.711 ************************************ 00:07:23.711 END TEST filesystem_in_capsule_ext4 00:07:23.711 ************************************ 00:07:23.711 01:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:23.711 01:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:07:23.711 01:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:23.711 01:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.711 ************************************ 00:07:23.711 START TEST filesystem_in_capsule_btrfs 00:07:23.711 ************************************ 00:07:23.711 01:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:23.711 01:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:23.711 01:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:23.711 01:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:23.711 01:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local fstype=btrfs 00:07:23.711 01:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:07:23.711 01:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local i=0 00:07:23.711 01:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local force 00:07:23.711 01:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # '[' btrfs = ext4 ']' 00:07:23.711 01:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # force=-f 00:07:23.711 01:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:23.969 btrfs-progs v6.6.2 00:07:23.969 See https://btrfs.readthedocs.io for more information. 00:07:23.969 00:07:23.969 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
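This btrfs pass repeats the earlier one verbatim; the only functional difference across the whole in-capsule suite is the transport created back at target/filesystem.sh@52, which lets up to 4096 bytes of write data ride inside the command capsule instead of a separate data transfer. Against a running target, the distinguishing call (values straight from the trace) would be:

    # In-capsule transport setup as traced: -c sets the in-capsule data size.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096
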
00:07:23.969 NOTE: several default settings have changed in version 5.15, please make sure 00:07:23.969 this does not affect your deployments: 00:07:23.969 - DUP for metadata (-m dup) 00:07:23.969 - enabled no-holes (-O no-holes) 00:07:23.969 - enabled free-space-tree (-R free-space-tree) 00:07:23.969 00:07:23.969 Label: (null) 00:07:23.969 UUID: 4a2bf0a3-1c85-4911-8013-409c3f95bbde 00:07:23.969 Node size: 16384 00:07:23.969 Sector size: 4096 00:07:23.969 Filesystem size: 510.00MiB 00:07:23.969 Block group profiles: 00:07:23.969 Data: single 8.00MiB 00:07:23.969 Metadata: DUP 32.00MiB 00:07:23.969 System: DUP 8.00MiB 00:07:23.969 SSD detected: yes 00:07:23.969 Zoned device: no 00:07:23.969 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:23.969 Runtime features: free-space-tree 00:07:23.969 Checksum: crc32c 00:07:23.969 Number of devices: 1 00:07:23.969 Devices: 00:07:23.969 ID SIZE PATH 00:07:23.969 1 510.00MiB /dev/nvme0n1p1 00:07:23.969 00:07:23.969 01:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@942 -- # return 0 00:07:23.969 01:35:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:24.903 01:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:24.903 01:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:24.903 01:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:24.903 01:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:24.903 01:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:24.903 01:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:24.903 01:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3934378 00:07:24.903 01:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:24.903 01:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:24.903 01:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:24.903 01:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:24.903 00:07:24.903 real 0m1.165s 00:07:24.903 user 0m0.019s 00:07:24.903 sys 0m0.041s 00:07:24.903 01:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:24.903 01:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:24.903 ************************************ 00:07:24.903 END TEST filesystem_in_capsule_btrfs 00:07:24.903 ************************************ 00:07:24.903 01:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:24.903 01:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:07:24.903 01:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:24.903 01:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:24.903 ************************************ 00:07:24.903 START TEST filesystem_in_capsule_xfs 00:07:24.903 ************************************ 00:07:24.903 01:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create xfs nvme0n1 00:07:24.903 01:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:24.903 01:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:24.903 01:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:24.903 01:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local fstype=xfs 00:07:24.903 01:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:07:24.903 01:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local i=0 00:07:24.903 01:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local force 00:07:24.903 01:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # '[' xfs = ext4 ']' 00:07:24.903 01:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # force=-f 00:07:24.903 01:35:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:25.161 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:25.161 = sectsz=512 attr=2, projid32bit=1 00:07:25.161 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:25.161 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:25.161 data = bsize=4096 blocks=130560, imaxpct=25 00:07:25.161 = sunit=0 swidth=0 blks 00:07:25.161 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:25.161 log =internal log bsize=4096 blocks=16384, version=2 00:07:25.161 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:25.161 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:26.095 Discarding blocks...Done. 
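After this final xfs pass the suite tears everything down in the fixed order visible in the remaining trace: remove the partition under flock, disconnect the initiator, delete the subsystem over RPC, then kill the target process. Condensed, with names from the log and rpc.py standing in for the traced rpc_cmd wrapper:

    # Teardown sequence (target/filesystem.sh@91-101).
    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid"        # killprocess 3934378 in this log
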
00:07:26.095 01:35:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@942 -- # return 0 00:07:26.095 01:35:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:27.993 01:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:27.993 01:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:27.993 01:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:27.993 01:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:27.993 01:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:27.993 01:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:27.993 01:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3934378 00:07:27.993 01:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:27.993 01:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:27.993 01:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:27.993 01:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:27.993 00:07:27.993 real 0m2.758s 00:07:27.993 user 0m0.014s 00:07:27.993 sys 0m0.036s 00:07:27.993 01:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:27.993 01:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:27.993 ************************************ 00:07:27.993 END TEST filesystem_in_capsule_xfs 00:07:27.993 ************************************ 00:07:27.993 01:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:27.993 01:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:27.993 01:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:28.252 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:28.252 01:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:28.252 01:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # local i=0 00:07:28.252 01:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:07:28.252 01:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:28.252 01:35:51 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:07:28.252 01:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:28.252 01:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1228 -- # return 0 00:07:28.252 01:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:28.252 01:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:28.252 01:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.252 01:35:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:28.252 01:35:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:28.252 01:35:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3934378 00:07:28.252 01:35:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@947 -- # '[' -z 3934378 ']' 00:07:28.252 01:35:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # kill -0 3934378 00:07:28.252 01:35:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # uname 00:07:28.252 01:35:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:07:28.252 01:35:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3934378 00:07:28.252 01:35:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:07:28.252 01:35:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:07:28.252 01:35:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3934378' 00:07:28.252 killing process with pid 3934378 00:07:28.252 01:35:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # kill 3934378 00:07:28.252 [2024-05-15 01:35:52.030378] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:28.252 01:35:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # wait 3934378 00:07:28.819 01:35:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:28.819 00:07:28.819 real 0m11.079s 00:07:28.819 user 0m42.397s 00:07:28.819 sys 0m1.676s 00:07:28.819 01:35:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:28.819 01:35:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.819 ************************************ 00:07:28.819 END TEST nvmf_filesystem_in_capsule 00:07:28.819 ************************************ 00:07:28.819 01:35:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:28.819 01:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:07:28.819 01:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:28.819 01:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:28.819 01:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:28.819 01:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:28.819 01:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:28.819 rmmod nvme_tcp 00:07:28.819 rmmod nvme_fabrics 00:07:28.819 rmmod nvme_keyring 00:07:28.819 01:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:28.819 01:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:28.819 01:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:28.819 01:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:28.819 01:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:28.819 01:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:28.819 01:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:28.819 01:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:28.819 01:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:28.819 01:35:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:28.819 01:35:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:28.819 01:35:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.725 01:35:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:30.725 00:07:30.725 real 0m25.918s 00:07:30.725 user 1m20.909s 00:07:30.725 sys 0m5.196s 00:07:30.725 01:35:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:30.725 01:35:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:30.725 ************************************ 00:07:30.725 END TEST nvmf_filesystem 00:07:30.725 ************************************ 00:07:30.725 01:35:54 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:30.725 01:35:54 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:07:30.725 01:35:54 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:30.725 01:35:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:30.725 ************************************ 00:07:30.725 START TEST nvmf_target_discovery 00:07:30.725 ************************************ 00:07:30.725 01:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:30.983 * Looking for test storage... 
00:07:30.983 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:30.983 01:35:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:30.983 01:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:30.983 01:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.983 01:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.983 01:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.983 01:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.983 01:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.983 01:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.983 01:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.983 01:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.983 01:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.983 01:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.983 01:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:30.983 01:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:30.983 01:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.983 01:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.983 01:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:30.983 01:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.983 01:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:30.983 01:35:54 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.983 01:35:54 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.983 01:35:54 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.983 01:35:54 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.983 01:35:54 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.983 01:35:54 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.983 01:35:54 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:30.983 01:35:54 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.983 01:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:30.983 01:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:30.983 01:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:30.983 01:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:30.983 01:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.983 01:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.983 01:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:30.983 01:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:30.983 01:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:30.983 01:35:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:30.983 01:35:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:30.983 01:35:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:30.983 01:35:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:30.983 01:35:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:30.983 01:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:30.983 01:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:30.983 01:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:30.984 01:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:30.984 01:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:30.984 01:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.984 01:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:30.984 01:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.984 01:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:30.984 01:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:30.984 01:35:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:30.984 01:35:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:33.514 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:33.514 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:07:33.514 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:33.514 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:33.514 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:33.514 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:33.514 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:33.514 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:07:33.514 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:33.514 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:07:33.514 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:07:33.514 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:07:33.514 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:07:33.514 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:07:33.514 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:07:33.514 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:33.514 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:33.514 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:33.514 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:33.514 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:33.514 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:33.514 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:33.514 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:33.514 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:33.514 01:35:57 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:33.514 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:33.514 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:33.514 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:33.514 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:33.515 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:33.515 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:33.515 Found net devices under 0000:09:00.0: cvl_0_0 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:33.515 Found net devices under 0000:09:00.1: cvl_0_1 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up
00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:07:33.515 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:33.515 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms
00:07:33.515
00:07:33.515 --- 10.0.0.2 ping statistics ---
00:07:33.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:33.515 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms
00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:33.515 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:33.515 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms
00:07:33.515
00:07:33.515 --- 10.0.0.1 ping statistics ---
00:07:33.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:33.515 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms
00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0
00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp
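At this point nvmftestinit has finished the network prep: the target port cvl_0_0 was moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1, port 4420 is opened in the firewall, and the two pings confirm reachability in both directions. Condensed, and keeping the same cvl_* port names from the trace, the topology amounts to this sketch:

    # Target port lives in its own netns; initiator stays in the root netns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP in
    ping -c 1 10.0.0.2                                                 # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target ns -> initiator

The nvmf_tgt application is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt, below), so the kernel initiator and the SPDK target talk over real interfaces rather than loopback.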
00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF
00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@721 -- # xtrace_disable
00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=3938136
00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 3938136
00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@828 -- # '[' -z 3938136 ']'
00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local max_retries=100
00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:33.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@837 -- # xtrace_disable
00:07:33.515 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:33.515 [2024-05-15 01:35:57.441209] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization...
00:07:33.515 [2024-05-15 01:35:57.441317] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:33.774 EAL: No free 2048 kB hugepages reported on node 1
00:07:33.774 [2024-05-15 01:35:57.530703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:33.774 [2024-05-15 01:35:57.626919] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:33.774 [2024-05-15 01:35:57.626984] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:33.774 [2024-05-15 01:35:57.627001] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:07:33.774 [2024-05-15 01:35:57.627015] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:07:33.774 [2024-05-15 01:35:57.627026] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:07:33.774 [2024-05-15 01:35:57.627107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:07:33.774 [2024-05-15 01:35:57.627173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:07:33.774 [2024-05-15 01:35:57.627241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:07:33.774 [2024-05-15 01:35:57.627244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@857 -- # (( i == 0 ))
00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@861 -- # return 0
00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@727 -- # xtrace_disable
00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:34.033 [2024-05-15 01:35:57.782968] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4
00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:34.033 01:35:57
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.033 Null1 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.033 [2024-05-15 01:35:57.823015] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:34.033 [2024-05-15 01:35:57.823361] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.033 Null2 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 
]] 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.033 Null3 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.033 Null4 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4
00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420
00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:07:34.033 01:35:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420
00:07:34.292
00:07:34.292 Discovery Log Number of Records 6, Generation counter 6
00:07:34.292 =====Discovery Log Entry 0======
00:07:34.292 trtype: tcp
00:07:34.292 adrfam: ipv4
00:07:34.292 subtype: current discovery subsystem
00:07:34.292 treq: not required
00:07:34.292 portid: 0
00:07:34.292 trsvcid: 4420
00:07:34.292 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:07:34.292 traddr: 10.0.0.2
00:07:34.292 eflags: explicit discovery connections, duplicate discovery information
00:07:34.292 sectype: none
00:07:34.292 =====Discovery Log Entry 1======
00:07:34.292 trtype: tcp
00:07:34.292 adrfam: ipv4
00:07:34.292 subtype: nvme subsystem
00:07:34.292 treq: not required
00:07:34.292 portid: 0
00:07:34.292 trsvcid: 4420
00:07:34.292 subnqn: nqn.2016-06.io.spdk:cnode1
00:07:34.292 traddr: 10.0.0.2
00:07:34.292 eflags: none
00:07:34.292 sectype: none
00:07:34.292 =====Discovery Log Entry 2======
00:07:34.292 trtype: tcp
00:07:34.292 adrfam: ipv4
00:07:34.292 subtype: nvme subsystem
00:07:34.292 treq: not required
00:07:34.292 portid: 0
00:07:34.292 trsvcid: 4420
00:07:34.292 subnqn: nqn.2016-06.io.spdk:cnode2
00:07:34.292 traddr: 10.0.0.2
00:07:34.292 eflags: none
00:07:34.292 sectype: none
00:07:34.292 =====Discovery Log Entry 3======
00:07:34.292 trtype: tcp
00:07:34.292 adrfam: ipv4
00:07:34.292 subtype: nvme subsystem
00:07:34.292 treq: not required
00:07:34.292 portid: 0
00:07:34.292 trsvcid: 4420
00:07:34.292 subnqn: nqn.2016-06.io.spdk:cnode3
00:07:34.292 traddr: 10.0.0.2
00:07:34.292 eflags: none
00:07:34.292 sectype: none
00:07:34.292 =====Discovery Log Entry 4======
00:07:34.292 trtype: tcp
00:07:34.292 adrfam: ipv4
00:07:34.292 subtype: nvme subsystem
00:07:34.292 treq: not required
00:07:34.292 portid: 0
00:07:34.292 trsvcid: 4420
00:07:34.292 subnqn: nqn.2016-06.io.spdk:cnode4
00:07:34.292 traddr: 10.0.0.2
00:07:34.292 eflags: none
00:07:34.292 sectype: none
00:07:34.292 =====Discovery Log Entry 5======
00:07:34.292 trtype: tcp
00:07:34.292 adrfam: ipv4
00:07:34.292 subtype: discovery subsystem referral
00:07:34.292 treq: not required
00:07:34.292 portid: 0
00:07:34.292 trsvcid: 4430
00:07:34.292 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:07:34.292 traddr: 10.0.0.2
00:07:34.292 eflags: none
00:07:34.292 sectype: none
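The discovery page above contains six records, which is exactly what discovery.sh set up: entry 0 is the discovery subsystem answering the query itself, entries 1-4 are the subsystems cnode1 through cnode4 with their Null bdev namespaces, and entry 5 is the referral added on port 4430. Condensed, the same listing can be requested by hand with nvme-cli against the running target (the host NQN/ID pair is the one generated by nvme gen-hostnqn during nvmftestinit):

    # Ask the discovery service at 10.0.0.2:4420 for its discovery log page.
    nvme discover -t tcp -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
        --hostid=29f67375-a902-e411-ace9-001e67bc3c9a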
00:07:34.292 01:35:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:07:34.292 Perform nvmf subsystem discovery via RPC
00:07:34.292 01:35:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
00:07:34.292 01:35:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:07:34.292 01:35:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:34.292 [
00:07:34.292 {
00:07:34.292 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:07:34.292 "subtype": "Discovery",
00:07:34.292 "listen_addresses": [
00:07:34.292 {
00:07:34.292 "trtype": "TCP",
00:07:34.292 "adrfam": "IPv4",
00:07:34.292 "traddr": "10.0.0.2",
00:07:34.292 "trsvcid": "4420"
00:07:34.292 }
00:07:34.292 ],
00:07:34.292 "allow_any_host": true,
00:07:34.292 "hosts": []
00:07:34.292 },
00:07:34.292 {
00:07:34.292 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:07:34.292 "subtype": "NVMe",
00:07:34.292 "listen_addresses": [
00:07:34.292 {
00:07:34.292 "trtype": "TCP",
00:07:34.292 "adrfam": "IPv4",
00:07:34.292 "traddr": "10.0.0.2",
00:07:34.292 "trsvcid": "4420"
00:07:34.292 }
00:07:34.292 ],
00:07:34.292 "allow_any_host": true,
00:07:34.292 "hosts": [],
00:07:34.292 "serial_number": "SPDK00000000000001",
00:07:34.292 "model_number": "SPDK bdev Controller",
00:07:34.292 "max_namespaces": 32,
00:07:34.292 "min_cntlid": 1,
00:07:34.292 "max_cntlid": 65519,
00:07:34.292 "namespaces": [
00:07:34.292 {
00:07:34.292 "nsid": 1,
00:07:34.292 "bdev_name": "Null1",
00:07:34.292 "name": "Null1",
00:07:34.292 "nguid": "337047C8010E4D81BC64F147A0C115AA",
00:07:34.292 "uuid": "337047c8-010e-4d81-bc64-f147a0c115aa"
00:07:34.292 }
00:07:34.292 ]
00:07:34.292 },
00:07:34.292 {
00:07:34.292 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:07:34.292 "subtype": "NVMe",
00:07:34.292 "listen_addresses": [
00:07:34.292 {
00:07:34.292 "trtype": "TCP",
00:07:34.292 "adrfam": "IPv4",
00:07:34.292 "traddr": "10.0.0.2",
00:07:34.292 "trsvcid": "4420"
00:07:34.292 }
00:07:34.292 ],
00:07:34.292 "allow_any_host": true,
00:07:34.292 "hosts": [],
00:07:34.292 "serial_number": "SPDK00000000000002",
00:07:34.292 "model_number": "SPDK bdev Controller",
00:07:34.292 "max_namespaces": 32,
00:07:34.292 "min_cntlid": 1,
00:07:34.292 "max_cntlid": 65519,
00:07:34.292 "namespaces": [
00:07:34.292 {
00:07:34.292 "nsid": 1,
00:07:34.292 "bdev_name": "Null2",
00:07:34.292 "name": "Null2",
00:07:34.292 "nguid": "E7B3389B0C034022899F85ED75DCE62F",
00:07:34.292 "uuid": "e7b3389b-0c03-4022-899f-85ed75dce62f"
00:07:34.292 }
00:07:34.292 ]
00:07:34.292 },
00:07:34.292 {
00:07:34.292 "nqn": "nqn.2016-06.io.spdk:cnode3",
00:07:34.292 "subtype": "NVMe",
00:07:34.292 "listen_addresses": [
00:07:34.292 {
00:07:34.292 "trtype": "TCP",
00:07:34.292 "adrfam": "IPv4",
00:07:34.292 "traddr": "10.0.0.2",
00:07:34.292 "trsvcid": "4420"
00:07:34.292 }
00:07:34.292 ],
00:07:34.292 "allow_any_host": true,
00:07:34.292 "hosts": [],
00:07:34.292 "serial_number": "SPDK00000000000003",
00:07:34.292 "model_number": "SPDK bdev Controller",
00:07:34.292 "max_namespaces": 32,
00:07:34.292 "min_cntlid": 1,
00:07:34.292 "max_cntlid": 65519,
00:07:34.292 "namespaces": [
00:07:34.292 {
00:07:34.292 "nsid": 1,
00:07:34.292 "bdev_name": "Null3",
00:07:34.292 "name": "Null3",
00:07:34.292 "nguid": "AEA1305DB75E46069A32FB55000CF25D",
00:07:34.292 "uuid": "aea1305d-b75e-4606-9a32-fb55000cf25d"
00:07:34.292 }
00:07:34.292 ]
00:07:34.292 },
00:07:34.292 {
00:07:34.292 "nqn": "nqn.2016-06.io.spdk:cnode4",
00:07:34.292 "subtype": "NVMe",
00:07:34.292 "listen_addresses": [
00:07:34.292 {
00:07:34.292 "trtype": "TCP",
00:07:34.292 "adrfam": "IPv4",
00:07:34.292 "traddr": "10.0.0.2",
00:07:34.293 "trsvcid": "4420"
00:07:34.293 }
00:07:34.293 ],
00:07:34.293 "allow_any_host": true,
00:07:34.293 "hosts": [],
00:07:34.293 "serial_number": "SPDK00000000000004",
00:07:34.293 "model_number": "SPDK bdev Controller",
00:07:34.293 "max_namespaces": 32,
00:07:34.293 "min_cntlid": 1,
00:07:34.293 "max_cntlid": 65519,
00:07:34.293 "namespaces": [
00:07:34.293 {
00:07:34.293 "nsid": 1,
00:07:34.293 "bdev_name": "Null4",
00:07:34.293 "name": "Null4",
00:07:34.293 "nguid": "F3BD3E2953D64FBBA29F058752E24780",
00:07:34.293 "uuid": "f3bd3e29-53d6-4fbb-a29f-058752e24780"
00:07:34.293 }
00:07:34.293 ]
00:07:34.293 }
00:07:34.293 ]
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
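The nvmf_get_subsystems dump above mirrors the discovery log from the RPC side: the discovery subsystem plus the four NVMe subsystems, each listening on 10.0.0.2:4420 and exposing a single null-bdev namespace. A sketch of slicing that JSON with scripts/rpc.py from the spdk checkout and jq, the same pairing the test applies below to bdev_get_bdevs (these particular jq filters are illustrative, not part of the test):

    # List every subsystem NQN, then every namespace bdev, from the live target.
    scripts/rpc.py nvmf_get_subsystems | jq -r '.[].nqn'
    scripts/rpc.py nvmf_get_subsystems | jq -r '.[].namespaces[]?.bdev_name'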
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name'
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs=
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']'
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT
00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:34.293
01:35:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:34.293 01:35:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:34.293 rmmod nvme_tcp 00:07:34.551 rmmod nvme_fabrics 00:07:34.551 rmmod nvme_keyring 00:07:34.551 01:35:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:34.551 01:35:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:34.551 01:35:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:34.551 01:35:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 3938136 ']' 00:07:34.551 01:35:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 3938136 00:07:34.551 01:35:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@947 -- # '[' -z 3938136 ']' 00:07:34.551 01:35:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # kill -0 3938136 00:07:34.551 01:35:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # uname 00:07:34.551 01:35:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:07:34.551 01:35:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3938136 00:07:34.551 01:35:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:07:34.551 01:35:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:07:34.551 01:35:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3938136' 00:07:34.551 killing process with pid 3938136 00:07:34.551 01:35:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # kill 3938136 00:07:34.551 [2024-05-15 01:35:58.285065] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:34.551 01:35:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@971 -- # wait 3938136 00:07:34.810 01:35:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:34.810 01:35:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:34.810 01:35:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:34.810 01:35:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:34.810 01:35:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:34.810 01:35:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:34.810 01:35:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:34.810 01:35:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:36.712 01:36:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:36.712 00:07:36.712 real 0m5.924s 00:07:36.712 user 
0m4.497s 00:07:36.712 sys 0m2.200s 00:07:36.712 01:36:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:36.712 01:36:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:36.712 ************************************ 00:07:36.712 END TEST nvmf_target_discovery 00:07:36.712 ************************************ 00:07:36.712 01:36:00 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:36.712 01:36:00 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:07:36.712 01:36:00 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:36.712 01:36:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:36.712 ************************************ 00:07:36.712 START TEST nvmf_referrals 00:07:36.712 ************************************ 00:07:36.712 01:36:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:36.712 * Looking for test storage... 00:07:36.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:36.972 01:36:00 
nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:36.972 01:36:00 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:07:36.972 01:36:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:39.501 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:39.501 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:39.501 Found net devices under 0000:09:00.0: cvl_0_0 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:39.501 Found net devices under 0000:09:00.1: cvl_0_1 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:39.501 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:39.502 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:39.502 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:39.502 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
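
At this point nvmf_tcp_init has split the two ports of the E810 NIC into a target side and an initiator side: cvl_0_0 is moved into the network namespace cvl_0_0_ns_spdk and will serve 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, so the NVMe/TCP traffic actually crosses the physical link. A minimal sketch of the sequence running here, assuming the same interface names, namespace name, and addresses this run happened to pick:

    # Interface names, namespace, and addresses below are the values visible
    # in this trace; substitute your own two ports on another machine.
    TARGET_IF=cvl_0_0          # moved into the namespace, serves 10.0.0.2
    INIT_IF=cvl_0_1            # stays in the root namespace as 10.0.0.1
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INIT_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INIT_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INIT_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2         # initiator -> target reachability check
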
00:07:39.502 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:39.502 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:39.502 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:39.502 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:39.502 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:39.502 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:39.502 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:39.502 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:07:39.502 00:07:39.502 --- 10.0.0.2 ping statistics --- 00:07:39.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.502 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:07:39.502 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:39.502 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:39.502 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:07:39.502 00:07:39.502 --- 10.0.0.1 ping statistics --- 00:07:39.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.502 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:07:39.502 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:39.502 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:07:39.502 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:39.502 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:39.502 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:39.502 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:39.502 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:39.502 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:39.502 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:39.502 01:36:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:39.502 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:39.502 01:36:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@721 -- # xtrace_disable 00:07:39.502 01:36:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:39.502 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=3940636 00:07:39.502 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:39.502 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 3940636 00:07:39.502 01:36:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@828 -- # '[' -z 3940636 ']' 00:07:39.502 01:36:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.502 01:36:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local max_retries=100 00:07:39.502 01:36:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@835 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.502 01:36:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@837 -- # xtrace_disable 00:07:39.502 01:36:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:39.502 [2024-05-15 01:36:03.372695] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:07:39.502 [2024-05-15 01:36:03.372791] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:39.502 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.760 [2024-05-15 01:36:03.449165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:39.760 [2024-05-15 01:36:03.539415] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:39.760 [2024-05-15 01:36:03.539482] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:39.760 [2024-05-15 01:36:03.539518] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:39.760 [2024-05-15 01:36:03.539531] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:39.760 [2024-05-15 01:36:03.539541] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:39.760 [2024-05-15 01:36:03.539592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.760 [2024-05-15 01:36:03.539655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:39.760 [2024-05-15 01:36:03.539721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:39.760 [2024-05-15 01:36:03.539723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.760 01:36:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:07:39.760 01:36:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@861 -- # return 0 00:07:39.760 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:39.760 01:36:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@727 -- # xtrace_disable 00:07:39.760 01:36:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:39.760 01:36:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:39.760 01:36:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:39.760 01:36:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:39.760 01:36:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:40.040 [2024-05-15 01:36:03.692064] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:40.040 [2024-05-15 01:36:03.704022] nvmf_rpc.c: 
615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:40.040 [2024-05-15 01:36:03.704372] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:40.040 01:36:03 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:40.040 01:36:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:40.354 01:36:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:40.354 01:36:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:40.354 01:36:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:40.354 01:36:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:40.354 01:36:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:40.354 01:36:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:40.354 01:36:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:40.354 01:36:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:40.354 01:36:04 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 00:07:40.354 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:40.354 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:40.354 01:36:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:40.354 01:36:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:40.354 01:36:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:40.354 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:40.354 01:36:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:40.354 01:36:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:40.354 01:36:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:40.354 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:40.354 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:40.354 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:40.354 01:36:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:40.354 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:40.354 01:36:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:40.354 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:40.354 01:36:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:40.354 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:40.354 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:40.354 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:40.354 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:40.354 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:40.354 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:40.354 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:40.354 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:40.612 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:40.612 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:40.612 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:40.612 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:40.612 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:40.612 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:40.612 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:40.612 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:40.612 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:40.612 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:40.612 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:40.612 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:40.612 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:40.612 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:40.612 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:40.612 01:36:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:40.612 01:36:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:40.612 01:36:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:40.612 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:40.612 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:40.612 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:40.612 01:36:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:40.612 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:40.612 01:36:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:40.612 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:40.870 01:36:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:40.870 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:40.870 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:40.870 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:40.870 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:40.870 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:40.870 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:40.870 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 
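
The checks above exercise the full referral lifecycle: three referrals are published over RPC, read back both from the target (nvmf_discovery_get_referrals) and from the host's view of the discovery log page (nvme discover), then removed one by one; a second pass adds referrals with an explicit subsystem NQN (-n) and verifies the advertised subtype. A condensed sketch of that round trip, assuming rpc.py talks to the default /var/tmp/spdk.sock and omitting the --hostnqn/--hostid flags the trace passes to nvme discover:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Publish three discovery referrals on the target.
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        "$RPC" nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done

    # Target-side view of the referrals.
    "$RPC" nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

    # Host-side view: the same addresses appear as discovery log entries,
    # filtered to everything but the current discovery subsystem.
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
        | sort

    # Removal must name the same transport/address/port (and the same NQN,
    # if one was supplied with -n when the referral was added).
    "$RPC" nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
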
00:07:40.870 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:40.870 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:40.870 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:40.870 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:40.870 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:40.870 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:40.870 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:40.870 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:40.870 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:40.870 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:40.870 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:40.870 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:40.870 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:40.870 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:41.129 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:41.129 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:41.129 01:36:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:41.129 01:36:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.129 01:36:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:41.129 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:41.129 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:41.129 01:36:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:41.129 01:36:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.129 01:36:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:41.129 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:41.129 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:41.129 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:41.129 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:41.129 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:41.129 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:41.129 01:36:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:41.129 01:36:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:41.129 01:36:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:41.129 01:36:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:41.129 01:36:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:41.129 01:36:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:41.129 01:36:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:41.129 01:36:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:41.129 01:36:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:41.129 01:36:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:41.129 01:36:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:41.129 rmmod nvme_tcp 00:07:41.129 rmmod nvme_fabrics 00:07:41.388 rmmod nvme_keyring 00:07:41.388 01:36:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:41.388 01:36:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:41.388 01:36:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:41.388 01:36:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 3940636 ']' 00:07:41.388 01:36:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 3940636 00:07:41.388 01:36:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@947 -- # '[' -z 3940636 ']' 00:07:41.388 01:36:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # kill -0 3940636 00:07:41.388 01:36:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # uname 00:07:41.388 01:36:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:07:41.388 01:36:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3940636 00:07:41.388 01:36:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:07:41.388 01:36:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:07:41.388 01:36:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3940636' 00:07:41.388 killing process with pid 3940636 00:07:41.388 01:36:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # kill 3940636 00:07:41.388 [2024-05-15 01:36:05.114381] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:41.388 01:36:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@971 -- # wait 3940636 00:07:41.647 01:36:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:41.647 01:36:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:41.647 01:36:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:41.647 01:36:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:41.647 01:36:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 
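
The teardown above runs in a deliberate order: host-side NVMe/TCP kernel modules are unloaded first so no live connections pin the target, then the nvmf_tgt reactor process is killed and reaped, and finally the network namespace is removed. A hedged outline of that sequence, assuming _remove_spdk_ns amounts to deleting the namespace (which returns the moved port to the root namespace):

    sync
    modprobe -v -r nvme-tcp            # also drops nvme_fabrics/nvme_keyring
    kill "$nvmfpid" && wait "$nvmfpid" # pid 3940636 in this run
    ip netns delete cvl_0_0_ns_spdk    # assumed effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1
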
00:07:41.647 01:36:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:41.647 01:36:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:41.647 01:36:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.549 01:36:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:43.549 00:07:43.549 real 0m6.776s 00:07:43.549 user 0m8.603s 00:07:43.549 sys 0m2.333s 00:07:43.549 01:36:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:43.549 01:36:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:43.549 ************************************ 00:07:43.549 END TEST nvmf_referrals 00:07:43.549 ************************************ 00:07:43.549 01:36:07 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:43.549 01:36:07 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:07:43.549 01:36:07 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:43.549 01:36:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:43.549 ************************************ 00:07:43.549 START TEST nvmf_connect_disconnect 00:07:43.549 ************************************ 00:07:43.549 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:43.549 * Looking for test storage... 00:07:43.549 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:43.549 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:43.549 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:43.807 01:36:07 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
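
As in the previous test, sourcing nvmf/common.sh assembles the nvmf_tgt command line as a bash array: the next lines append a shared-memory id and a full tracepoint mask, and once the namespace exists later in the setup the whole argv is prefixed with the netns wrapper before launch. In outline, with the values this run uses:

    NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm id 0, trace mask 0xFFFF
    NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")

    "${NVMF_APP[@]}" -m 0xF &                     # 0xF -> reactors on cores 0-3
    nvmfpid=$!
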
00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:07:43.807 01:36:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:07:46.338 
01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:46.338 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:46.338 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:46.338 Found net devices under 0000:09:00.0: cvl_0_0 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:46.338 Found net devices under 0000:09:00.1: cvl_0_1 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:46.338 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:46.339 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:46.339 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:46.339 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:46.339 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:46.339 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:46.339 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:46.339 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:46.339 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:46.339 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:46.339 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:46.339 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:46.339 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:46.339 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:46.339 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:46.339 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:46.339 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:46.339 01:36:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:46.339 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:46.339 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:46.339 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:46.339 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:46.339 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:07:46.339 00:07:46.339 --- 10.0.0.2 ping statistics --- 00:07:46.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.339 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:07:46.339 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:46.339 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:46.339 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:07:46.339 00:07:46.339 --- 10.0.0.1 ping statistics --- 00:07:46.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.339 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:07:46.339 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:46.339 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:07:46.339 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:46.339 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:46.339 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:46.339 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:46.339 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:46.339 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:46.339 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:46.339 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:46.339 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:46.339 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@721 -- # xtrace_disable 00:07:46.339 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:46.339 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=3943729 00:07:46.339 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:46.339 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 3943729 00:07:46.339 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@828 -- # '[' -z 3943729 ']' 00:07:46.339 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.339 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local max_retries=100 00:07:46.339 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.339 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # xtrace_disable 00:07:46.339 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:46.339 [2024-05-15 01:36:10.124393] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
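
The block above is nvmf_tcp_init: one E810 port (cvl_0_0) is moved into a private network namespace to act as the target at 10.0.0.2/24, the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1/24, TCP port 4420 is opened, and both directions are ping-verified. A minimal sketch of the same topology, using a hypothetical veth pair (veth_ini/veth_tgt) in place of the two physical ports so it can run on any Linux host as root:

    ip netns add spdk_tgt_ns                        # target-side namespace
    ip link add veth_ini type veth peer name veth_tgt
    ip link set veth_tgt netns spdk_tgt_ns          # target end moves into the ns
    ip addr add 10.0.0.1/24 dev veth_ini            # initiator side, root ns
    ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
    ip link set veth_ini up
    ip netns exec spdk_tgt_ns ip link set veth_tgt up
    ip netns exec spdk_tgt_ns ip link set lo up
    iptables -I INPUT 1 -i veth_ini -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port, as in the trace
    ping -c 1 10.0.0.2                              # initiator -> target
    ip netns exec spdk_tgt_ns ping -c 1 10.0.0.1    # target -> initiator
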
00:07:46.339 [2024-05-15 01:36:10.124467] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:46.339 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.339 [2024-05-15 01:36:10.201118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:46.599 [2024-05-15 01:36:10.293619] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:46.599 [2024-05-15 01:36:10.293674] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:46.599 [2024-05-15 01:36:10.293690] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:46.599 [2024-05-15 01:36:10.293703] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:46.599 [2024-05-15 01:36:10.293715] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:46.599 [2024-05-15 01:36:10.293789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.599 [2024-05-15 01:36:10.293849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:46.599 [2024-05-15 01:36:10.293873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:46.599 [2024-05-15 01:36:10.293877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.599 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:07:46.599 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@861 -- # return 0 00:07:46.599 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:46.599 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@727 -- # xtrace_disable 00:07:46.599 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:46.599 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:46.599 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:46.599 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:46.599 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:46.599 [2024-05-15 01:36:10.452226] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:46.599 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:46.599 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:46.599 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:46.599 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:46.599 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:46.599 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:46.599 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:46.599 01:36:10 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:46.599 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:46.599 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:46.599 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:46.599 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:46.599 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:46.599 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:46.599 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:46.599 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:46.599 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:46.599 [2024-05-15 01:36:10.513321] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:46.599 [2024-05-15 01:36:10.513669] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:46.599 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:46.599 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:07:46.599 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:07:46.599 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:07:46.599 01:36:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:07:49.128 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:51.702 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:53.600 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:56.126 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:58.027 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:00.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:02.458 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:05.034 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:07.562 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:09.459 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:11.985 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:13.883 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:16.407 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:18.305 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:20.830 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:23.356 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:25.255 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:27.781 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:29.716 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:32.242 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:34.769 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:36.665 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:39.193 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:41.718 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:43.616 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:46.144 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:48.041 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:50.566 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:55.013 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.913 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.437 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.334 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.860 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.385 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.283 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.811 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.709 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.234 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.133 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.656 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.624 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.141 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.658 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.549 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.075 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.043 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.567 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.464 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.988 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.885 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.410 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.971 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.867 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.404 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.301 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.827 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.726 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.250 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.145 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.668 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.194 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.089 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.643 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.066 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.599 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.496 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.021 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:10:24.918 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.444 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.970 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.866 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.389 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.297 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.786 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.311 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.836 [2024-05-15 01:39:09.145864] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb55fc0 is same with the state(5) to be set 00:10:45.836 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.734 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.261 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.157 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.681 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.205 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.100 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.726 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.626 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.147 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.668 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.561 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.085 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.980 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.505 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.930 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.826 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.349 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.936 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.833 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.358 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.358 01:39:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:33.358 01:39:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:33.358 01:39:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:33.358 01:39:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:11:33.358 01:39:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:33.358 01:39:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:11:33.358 01:39:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:33.358 01:39:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:33.358 rmmod nvme_tcp 00:11:33.358 rmmod nvme_fabrics 00:11:33.358 rmmod nvme_keyring 00:11:33.358 01:39:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:33.358 01:39:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:11:33.358 
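
The long run of "disconnected 1 controller(s)" lines above is the body of the test: after the trace created the TCP transport (nvmf_create_transport -t tcp -o -u 8192 -c 0), a Malloc0 bdev (64 MiB, 512-byte blocks), subsystem nqn.2016-06.io.spdk:cnode1 and a 10.0.0.2:4420 listener, the script connects and disconnects num_iterations=100 times; a single tcp.c recv-state error appears mid-run without failing the test. A hedged reconstruction of that loop follows; only the '-i 8' is copied from the trace's NVME_CONNECT, the remaining connect flags are inferred from the listener parameters rather than quoted from the script body:

    num_iterations=100
    for ((i = 1; i <= num_iterations; i++)); do
        # eight I/O queues, per NVME_CONNECT='nvme connect -i 8' in the trace
        nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
        # each disconnect prints "NQN:... disconnected 1 controller(s)"
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done
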
01:39:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:11:33.358 01:39:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 3943729 ']' 00:11:33.358 01:39:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 3943729 00:11:33.358 01:39:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@947 -- # '[' -z 3943729 ']' 00:11:33.358 01:39:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # kill -0 3943729 00:11:33.358 01:39:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # uname 00:11:33.358 01:39:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:11:33.358 01:39:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3943729 00:11:33.358 01:39:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:11:33.358 01:39:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:11:33.358 01:39:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3943729' 00:11:33.358 killing process with pid 3943729 00:11:33.358 01:39:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # kill 3943729 00:11:33.358 [2024-05-15 01:39:56.908814] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:33.358 01:39:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # wait 3943729 00:11:33.358 01:39:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:33.358 01:39:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:33.358 01:39:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:33.358 01:39:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:33.358 01:39:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:33.358 01:39:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.358 01:39:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:33.358 01:39:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.890 01:39:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:35.890 00:11:35.890 real 3m51.779s 00:11:35.890 user 14m40.797s 00:11:35.890 sys 0m31.526s 00:11:35.890 01:39:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # xtrace_disable 00:11:35.890 01:39:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:35.890 ************************************ 00:11:35.890 END TEST nvmf_connect_disconnect 00:11:35.890 ************************************ 00:11:35.890 01:39:59 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:35.890 01:39:59 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:11:35.890 01:39:59 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:11:35.890 01:39:59 nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:11:35.890 ************************************ 00:11:35.890 START TEST nvmf_multitarget 00:11:35.890 ************************************ 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:35.890 * Looking for test storage... 00:11:35.890 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
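
Each test begins by tearing down any namespace the previous test left behind (the remove_spdk_ns call above) before re-detecting the NICs. A hypothetical stand-in for what that cleanup amounts to, assuming the helper's observable effect rather than quoting its body from nvmf/common.sh:

    # Deleting the namespace implicitly returns the physical port to the root ns.
    if ip netns list | grep -qw cvl_0_0_ns_spdk; then
        ip netns delete cvl_0_0_ns_spdk
    fi
    ip -4 addr flush cvl_0_1 2>/dev/null || true   # matches the flush the trace runs around teardown
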
00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:11:35.890 01:39:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:38.421 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:38.421 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:38.421 Found net devices under 0000:09:00.0: cvl_0_0 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
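
The discovery loop above resolves each allow-listed PCI address to its kernel net device through sysfs and keeps only interfaces whose link is up. The same idiom in isolation; the operstate read is an assumption consistent with the trace's '[[ up == up ]]' comparison, not a quote from common.sh:

    pci=0000:09:00.0                                   # example address from this run
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    for net_dev in "${pci_net_devs[@]}"; do
        if [[ $(<"$net_dev/operstate") == up ]]; then  # keep only live links
            echo "Found net devices under $pci: ${net_dev##*/}"
        fi
    done
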
00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:38.421 Found net devices under 0000:09:00.1: cvl_0_1 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:38.421 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:38.421 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:11:38.421 00:11:38.421 --- 10.0.0.2 ping statistics --- 00:11:38.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.421 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:38.421 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:38.421 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:11:38.421 00:11:38.421 --- 10.0.0.1 ping statistics --- 00:11:38.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.421 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@721 -- # xtrace_disable 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=3974522 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:38.421 01:40:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 3974522 00:11:38.422 01:40:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@828 -- # '[' -z 3974522 ']' 00:11:38.422 01:40:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.422 01:40:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local max_retries=100 00:11:38.422 01:40:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.422 01:40:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@837 -- # xtrace_disable 00:11:38.422 01:40:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:38.422 [2024-05-15 01:40:01.938850] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
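
nvmfappstart above launches nvmf_tgt inside the target namespace and then blocks in waitforlisten (max_retries=100 in the trace) until the app answers on /var/tmp/spdk.sock. A simplified sketch of that launch-and-wait pattern; the real helper probes the RPC socket, while this stand-in only polls for the socket file:

    SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
    ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for _ in {1..100}; do                      # roughly a 10 s budget
        [ -S /var/tmp/spdk.sock ] && break     # RPC listener is up
        sleep 0.1
    done
    echo "nvmf_tgt is pid $nvmfpid"
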
00:11:38.422 [2024-05-15 01:40:01.938931] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:38.422 EAL: No free 2048 kB hugepages reported on node 1 00:11:38.422 [2024-05-15 01:40:02.015349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:38.422 [2024-05-15 01:40:02.105324] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:38.422 [2024-05-15 01:40:02.105379] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:38.422 [2024-05-15 01:40:02.105408] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:38.422 [2024-05-15 01:40:02.105420] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:38.422 [2024-05-15 01:40:02.105430] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:38.422 [2024-05-15 01:40:02.109236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:38.422 [2024-05-15 01:40:02.109319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:38.422 [2024-05-15 01:40:02.109365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:38.422 [2024-05-15 01:40:02.109369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.422 01:40:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:11:38.422 01:40:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@861 -- # return 0 00:11:38.422 01:40:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:38.422 01:40:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@727 -- # xtrace_disable 00:11:38.422 01:40:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:38.422 01:40:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:38.422 01:40:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:38.422 01:40:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:38.422 01:40:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:38.679 01:40:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:38.679 01:40:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:38.679 "nvmf_tgt_1" 00:11:38.680 01:40:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:38.680 "nvmf_tgt_2" 00:11:38.680 01:40:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:38.680 01:40:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:38.937 01:40:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:38.937 
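
The multitarget assertions above distill to a target count that must move from 1 to 3 as two extra targets are created; the deletions on the following lines bring it back down. The same checks as a standalone sequence, with names and flags copied from the trace:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists
    $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32        # '-n name -s size' as in the trace
    $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]   # default + the two new targets
    $rpc_py nvmf_delete_target -n nvmf_tgt_1
    $rpc_py nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # back to the default
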
01:40:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:38.937 true 00:11:38.937 01:40:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:39.194 true 00:11:39.194 01:40:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:39.194 01:40:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:39.194 01:40:03 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:39.194 01:40:03 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:39.194 01:40:03 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:39.194 01:40:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:39.194 01:40:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:11:39.194 01:40:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:39.194 01:40:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:11:39.194 01:40:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:39.194 01:40:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:39.194 rmmod nvme_tcp 00:11:39.194 rmmod nvme_fabrics 00:11:39.194 rmmod nvme_keyring 00:11:39.194 01:40:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:39.194 01:40:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:11:39.194 01:40:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:11:39.194 01:40:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 3974522 ']' 00:11:39.194 01:40:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 3974522 00:11:39.194 01:40:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@947 -- # '[' -z 3974522 ']' 00:11:39.194 01:40:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # kill -0 3974522 00:11:39.194 01:40:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # uname 00:11:39.194 01:40:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:11:39.194 01:40:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3974522 00:11:39.453 01:40:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:11:39.453 01:40:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:11:39.453 01:40:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3974522' 00:11:39.453 killing process with pid 3974522 00:11:39.453 01:40:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # kill 3974522 00:11:39.453 01:40:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@971 -- # wait 3974522 00:11:39.453 01:40:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:39.453 01:40:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:39.453 01:40:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:39.453 01:40:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:39.453 01:40:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:39.453 01:40:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:39.453 01:40:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:39.453 01:40:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:41.989 01:40:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:41.989 00:11:41.989 real 0m6.157s 00:11:41.989 user 0m6.592s 00:11:41.989 sys 0m2.230s 00:11:41.989 01:40:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # xtrace_disable 00:11:41.989 01:40:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:41.989 ************************************ 00:11:41.989 END TEST nvmf_multitarget 00:11:41.989 ************************************ 00:11:41.989 01:40:05 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:41.989 01:40:05 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:11:41.989 01:40:05 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:11:41.989 01:40:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:41.989 ************************************ 00:11:41.989 START TEST nvmf_rpc 00:11:41.989 ************************************ 00:11:41.989 01:40:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:41.989 * Looking for test storage... 00:11:41.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:41.989 01:40:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:41.989 01:40:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:41.989 01:40:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:41.989 01:40:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:41.989 01:40:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:41.989 01:40:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:41.989 01:40:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:41.989 01:40:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:41.989 01:40:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:41.989 01:40:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:41.989 01:40:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:41.989 01:40:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:41.989 01:40:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:41.989 01:40:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:41.989 01:40:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:41.989 01:40:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:41.989 01:40:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:41.989 01:40:05 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:41.989 01:40:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:41.989 01:40:05 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:41.989 01:40:05 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:41.989 01:40:05 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:41.989 01:40:05 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.990 01:40:05 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.990 01:40:05 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.990 01:40:05 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:41.990 01:40:05 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.990 01:40:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:11:41.990 01:40:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:41.990 01:40:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:41.990 01:40:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:41.990 01:40:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:41.990 01:40:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:41.990 
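
The common.sh lines above keep assembling command lines as bash arrays (NVME_HOST, NVMF_APP) so multi-word flags survive quoting, and the namespace prefix is spliced in front just before launch. The idiom in miniature, with illustrative values standing in for the generated ones:

    NVME_HOSTNQN=$(nvme gen-hostnqn)                        # e.g. nqn.2014-08.org.nvmexpress:uuid:...
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN")
    NVMF_APP=(nvmf_tgt -i 0 -e 0xFFFF)                      # shm id + trace-group mask
    NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")  # run the target inside the ns
    echo "would exec: ${NVMF_APP[*]}"
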
01:40:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:41.990 01:40:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:41.990 01:40:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:41.990 01:40:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:41.990 01:40:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:41.990 01:40:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:41.990 01:40:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:41.990 01:40:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:41.990 01:40:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:41.990 01:40:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:41.990 01:40:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:41.990 01:40:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:41.990 01:40:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:41.990 01:40:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:41.990 01:40:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:41.990 01:40:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:11:41.990 01:40:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:44.519 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:44.519 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:44.519 Found net devices under 0000:09:00.0: cvl_0_0 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:44.519 
01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:44.519 Found net devices under 0000:09:00.1: cvl_0_1 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:44.519 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:44.520 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:44.520 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:44.520 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:44.520 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:44.520 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:44.520 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:44.520 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:44.520 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:44.520 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:44.520 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:44.520 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:11:44.520 00:11:44.520 --- 10.0.0.2 ping statistics --- 00:11:44.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.520 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:11:44.520 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:44.520 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:44.520 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:11:44.520 00:11:44.520 --- 10.0.0.1 ping statistics --- 00:11:44.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.520 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:11:44.520 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:44.520 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:11:44.520 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:44.520 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:44.520 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:44.520 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:44.520 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:44.520 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:44.520 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:44.520 01:40:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:44.520 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:44.520 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@721 -- # xtrace_disable 00:11:44.520 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.520 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=3976914 00:11:44.520 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:44.520 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 3976914 00:11:44.520 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@828 -- # '[' -z 3976914 ']' 00:11:44.520 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.520 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:11:44.520 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.520 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:11:44.520 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.520 [2024-05-15 01:40:08.213640] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
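For reference, the nvmf_tcp_init sequence traced above boils down to the following standalone sketch. It moves one port of the dual-port NIC into a private network namespace so that the target (10.0.0.2, inside the namespace) and the initiator (10.0.0.1, root namespace) exchange real TCP traffic on a single host. The interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing are the values from this particular run; other machines will differ.

# one-host NVMe/TCP test topology (sketch of the nvmf_tcp_init steps above)
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add $NS
ip link set cvl_0_0 netns $NS                          # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec $NS ip link set cvl_0_0 up
ip netns exec $NS ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                     # root namespace -> target
ip netns exec $NS ping -c 1 10.0.0.1   # target namespace -> initiator

The target application itself is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt, per nvmf/common.sh@480 above), which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD at @270.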
00:11:44.520 [2024-05-15 01:40:08.213735] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:44.520 EAL: No free 2048 kB hugepages reported on node 1 00:11:44.520 [2024-05-15 01:40:08.297747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:44.520 [2024-05-15 01:40:08.390074] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:44.520 [2024-05-15 01:40:08.390141] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:44.520 [2024-05-15 01:40:08.390157] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:44.520 [2024-05-15 01:40:08.390171] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:44.520 [2024-05-15 01:40:08.390183] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:44.520 [2024-05-15 01:40:08.392241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:44.520 [2024-05-15 01:40:08.392310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:44.520 [2024-05-15 01:40:08.392359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:44.520 [2024-05-15 01:40:08.392362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.778 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:11:44.778 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@861 -- # return 0 00:11:44.778 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:44.778 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@727 -- # xtrace_disable 00:11:44.778 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.778 01:40:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:44.778 01:40:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:44.778 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:44.778 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.778 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:44.778 01:40:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:44.778 "tick_rate": 2700000000, 00:11:44.778 "poll_groups": [ 00:11:44.778 { 00:11:44.778 "name": "nvmf_tgt_poll_group_000", 00:11:44.778 "admin_qpairs": 0, 00:11:44.778 "io_qpairs": 0, 00:11:44.778 "current_admin_qpairs": 0, 00:11:44.778 "current_io_qpairs": 0, 00:11:44.778 "pending_bdev_io": 0, 00:11:44.778 "completed_nvme_io": 0, 00:11:44.778 "transports": [] 00:11:44.778 }, 00:11:44.778 { 00:11:44.778 "name": "nvmf_tgt_poll_group_001", 00:11:44.778 "admin_qpairs": 0, 00:11:44.778 "io_qpairs": 0, 00:11:44.778 "current_admin_qpairs": 0, 00:11:44.778 "current_io_qpairs": 0, 00:11:44.778 "pending_bdev_io": 0, 00:11:44.778 "completed_nvme_io": 0, 00:11:44.778 "transports": [] 00:11:44.778 }, 00:11:44.778 { 00:11:44.778 "name": "nvmf_tgt_poll_group_002", 00:11:44.778 "admin_qpairs": 0, 00:11:44.778 "io_qpairs": 0, 00:11:44.778 "current_admin_qpairs": 0, 00:11:44.778 "current_io_qpairs": 0, 00:11:44.778 "pending_bdev_io": 0, 00:11:44.778 "completed_nvme_io": 0, 00:11:44.778 "transports": [] 
00:11:44.778 }, 00:11:44.778 { 00:11:44.778 "name": "nvmf_tgt_poll_group_003", 00:11:44.778 "admin_qpairs": 0, 00:11:44.778 "io_qpairs": 0, 00:11:44.778 "current_admin_qpairs": 0, 00:11:44.778 "current_io_qpairs": 0, 00:11:44.778 "pending_bdev_io": 0, 00:11:44.778 "completed_nvme_io": 0, 00:11:44.778 "transports": [] 00:11:44.778 } 00:11:44.778 ] 00:11:44.778 }' 00:11:44.778 01:40:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:44.778 01:40:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:44.778 01:40:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:44.778 01:40:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:44.778 01:40:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:44.778 01:40:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:44.778 01:40:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:44.778 01:40:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:44.778 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:44.778 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.778 [2024-05-15 01:40:08.640131] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:44.778 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:44.778 01:40:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:44.778 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:44.778 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.778 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:44.778 01:40:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:44.778 "tick_rate": 2700000000, 00:11:44.778 "poll_groups": [ 00:11:44.778 { 00:11:44.778 "name": "nvmf_tgt_poll_group_000", 00:11:44.778 "admin_qpairs": 0, 00:11:44.778 "io_qpairs": 0, 00:11:44.778 "current_admin_qpairs": 0, 00:11:44.778 "current_io_qpairs": 0, 00:11:44.778 "pending_bdev_io": 0, 00:11:44.778 "completed_nvme_io": 0, 00:11:44.778 "transports": [ 00:11:44.778 { 00:11:44.778 "trtype": "TCP" 00:11:44.778 } 00:11:44.778 ] 00:11:44.778 }, 00:11:44.778 { 00:11:44.778 "name": "nvmf_tgt_poll_group_001", 00:11:44.778 "admin_qpairs": 0, 00:11:44.778 "io_qpairs": 0, 00:11:44.778 "current_admin_qpairs": 0, 00:11:44.778 "current_io_qpairs": 0, 00:11:44.778 "pending_bdev_io": 0, 00:11:44.778 "completed_nvme_io": 0, 00:11:44.778 "transports": [ 00:11:44.778 { 00:11:44.778 "trtype": "TCP" 00:11:44.778 } 00:11:44.778 ] 00:11:44.778 }, 00:11:44.778 { 00:11:44.778 "name": "nvmf_tgt_poll_group_002", 00:11:44.778 "admin_qpairs": 0, 00:11:44.778 "io_qpairs": 0, 00:11:44.778 "current_admin_qpairs": 0, 00:11:44.778 "current_io_qpairs": 0, 00:11:44.778 "pending_bdev_io": 0, 00:11:44.778 "completed_nvme_io": 0, 00:11:44.778 "transports": [ 00:11:44.778 { 00:11:44.778 "trtype": "TCP" 00:11:44.778 } 00:11:44.778 ] 00:11:44.778 }, 00:11:44.778 { 00:11:44.778 "name": "nvmf_tgt_poll_group_003", 00:11:44.778 "admin_qpairs": 0, 00:11:44.778 "io_qpairs": 0, 00:11:44.778 "current_admin_qpairs": 0, 00:11:44.778 "current_io_qpairs": 0, 00:11:44.778 "pending_bdev_io": 0, 00:11:44.778 "completed_nvme_io": 0, 00:11:44.778 "transports": [ 00:11:44.778 { 00:11:44.778 "trtype": "TCP" 00:11:44.778 } 00:11:44.778 ] 00:11:44.778 } 00:11:44.778 ] 
00:11:44.778 }' 00:11:44.778 01:40:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:44.778 01:40:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:44.778 01:40:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:44.778 01:40:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:44.778 01:40:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:44.778 01:40:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:44.778 01:40:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:44.778 01:40:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:44.778 01:40:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:45.037 01:40:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:45.037 01:40:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:45.037 01:40:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:45.037 01:40:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:45.037 01:40:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:45.037 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:45.037 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.037 Malloc1 00:11:45.037 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:45.037 01:40:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:45.037 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:45.037 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.037 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:45.037 01:40:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:45.037 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:45.037 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.037 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:45.037 01:40:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:45.037 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:45.037 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.037 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:45.037 01:40:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:45.037 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:45.037 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.037 [2024-05-15 01:40:08.801001] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:45.037 [2024-05-15 01:40:08.801355] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:45.037 01:40:08 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:45.037 01:40:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:11:45.037 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:11:45.037 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:11:45.037 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:11:45.037 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:45.037 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:11:45.037 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:45.037 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:11:45.037 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:45.037 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:11:45.037 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:11:45.037 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:11:45.037 [2024-05-15 01:40:08.823783] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:11:45.037 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:45.037 could not add new controller: failed to write to nvme-fabrics device 00:11:45.037 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:11:45.037 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:11:45.037 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:11:45.037 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:11:45.037 01:40:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:45.037 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:45.037 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.037 01:40:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:45.037 01:40:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:45.603 01:40:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 
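The NOT-wrapped connect traced above is a deliberate negative test: target/rpc.sh@52-55 created cnode1 with the allow list enforced (nvmf_subsystem_allow_any_host -d) and no hosts whitelisted, so the first nvme connect must be rejected with "does not allow host"; only after nvmf_subsystem_add_host at @61 does the @62 connect succeed. Condensed, the flow looks like the sketch below. The $rpc invocation path is illustrative — the harness actually drives the same JSON-RPC methods through its rpc_cmd wrapper over /var/tmp/spdk.sock.

# host-ACL negative/positive test against a running nvmf_tgt (sketch)
rpc="scripts/rpc.py"   # assumed path; adjust to your SPDK tree
host_id="29f67375-a902-e411-ace9-001e67bc3c9a"
host_nqn="nqn.2014-08.org.nvmexpress:uuid:$host_id"

$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1   # enforce the allow list
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# must FAIL: cnode1 does not allow this host yet
if nvme connect --hostnqn="$host_nqn" --hostid="$host_id" -t tcp \
       -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420; then
    echo "connect unexpectedly succeeded" >&2; exit 1
fi

$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$host_nqn"
# now the same connect succeeds
nvme connect --hostnqn="$host_nqn" --hostid="$host_id" -t tcp \
     -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

# waitforserial, whose body is traced below, is roughly this poll (15 tries, 2 s apart):
for i in $(seq 1 15); do
    lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME && break
    sleep 2
done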
00:11:45.603 01:40:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:11:45.603 01:40:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:11:45.603 01:40:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:11:45.603 01:40:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:11:48.131 01:40:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:11:48.131 01:40:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:11:48.131 01:40:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:11:48.131 01:40:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:11:48.131 01:40:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:11:48.131 01:40:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:11:48.131 01:40:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:48.131 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.131 01:40:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:48.131 01:40:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:11:48.131 01:40:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:11:48.131 01:40:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:48.131 01:40:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:11:48.131 01:40:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:48.131 01:40:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:11:48.131 01:40:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:48.131 01:40:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:48.131 01:40:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:48.131 01:40:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:48.131 01:40:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:48.131 01:40:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:11:48.131 01:40:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:48.131 01:40:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:11:48.131 01:40:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:48.131 01:40:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:11:48.131 01:40:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:48.131 01:40:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:11:48.131 01:40:11 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:48.131 01:40:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:11:48.131 01:40:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:11:48.132 01:40:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:48.132 [2024-05-15 01:40:11.612511] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:11:48.132 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:48.132 could not add new controller: failed to write to nvme-fabrics device 00:11:48.132 01:40:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:11:48.132 01:40:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:11:48.132 01:40:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:11:48.132 01:40:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:11:48.132 01:40:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:48.132 01:40:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:48.132 01:40:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:48.132 01:40:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:48.132 01:40:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:48.389 01:40:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:48.389 01:40:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:11:48.389 01:40:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:11:48.389 01:40:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:11:48.389 01:40:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:11:50.284 01:40:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:11:50.284 01:40:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:11:50.284 01:40:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:11:50.284 01:40:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:11:50.284 01:40:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:11:50.284 01:40:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:11:50.284 01:40:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:50.540 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.540 01:40:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:50.540 01:40:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:11:50.540 01:40:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:11:50.540 01:40:14 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:50.540 01:40:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:11:50.540 01:40:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:50.540 01:40:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:11:50.540 01:40:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:50.540 01:40:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:50.541 01:40:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.541 01:40:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:50.541 01:40:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:50.541 01:40:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:50.541 01:40:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:50.541 01:40:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:50.541 01:40:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.541 01:40:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:50.541 01:40:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:50.541 01:40:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:50.541 01:40:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.541 [2024-05-15 01:40:14.289425] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:50.541 01:40:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:50.541 01:40:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:50.541 01:40:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:50.541 01:40:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.541 01:40:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:50.541 01:40:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:50.541 01:40:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:50.541 01:40:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.541 01:40:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:50.541 01:40:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:51.147 01:40:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:51.147 01:40:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:11:51.147 01:40:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:11:51.147 01:40:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:11:51.147 01:40:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:11:53.044 01:40:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:11:53.044 
01:40:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:11:53.044 01:40:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:11:53.044 01:40:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:11:53.044 01:40:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:11:53.044 01:40:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:11:53.044 01:40:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:53.302 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.302 01:40:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:53.302 01:40:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:11:53.302 01:40:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:11:53.302 01:40:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:53.302 01:40:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:11:53.302 01:40:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:53.302 01:40:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:11:53.302 01:40:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:53.302 01:40:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:53.302 01:40:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.302 01:40:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:53.302 01:40:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:53.302 01:40:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:53.302 01:40:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.302 01:40:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:53.302 01:40:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:53.302 01:40:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:53.302 01:40:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:53.302 01:40:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.302 01:40:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:53.302 01:40:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:53.302 01:40:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:53.302 01:40:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.302 [2024-05-15 01:40:17.060621] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:53.302 01:40:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:53.302 01:40:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:53.302 01:40:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:53.302 01:40:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set 
+x 00:11:53.302 01:40:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:53.303 01:40:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:53.303 01:40:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:53.303 01:40:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.303 01:40:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:53.303 01:40:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:53.867 01:40:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:53.867 01:40:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:11:53.867 01:40:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:11:53.867 01:40:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:11:53.867 01:40:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:11:55.764 01:40:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:11:55.764 01:40:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:11:55.764 01:40:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:11:55.764 01:40:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:11:55.764 01:40:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:11:55.764 01:40:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:11:55.764 01:40:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:56.021 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.022 01:40:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:56.022 01:40:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:11:56.022 01:40:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:11:56.022 01:40:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:56.022 01:40:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:11:56.022 01:40:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:56.022 01:40:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:11:56.022 01:40:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:56.022 01:40:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:56.022 01:40:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.022 01:40:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:56.022 01:40:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:56.022 01:40:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:56.022 01:40:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.022 01:40:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:56.022 01:40:19 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:56.022 01:40:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:56.022 01:40:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:56.022 01:40:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.022 01:40:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:56.022 01:40:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:56.022 01:40:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:56.022 01:40:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.022 [2024-05-15 01:40:19.822932] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:56.022 01:40:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:56.022 01:40:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:56.022 01:40:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:56.022 01:40:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.022 01:40:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:56.022 01:40:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:56.022 01:40:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:56.022 01:40:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.022 01:40:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:56.022 01:40:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:56.587 01:40:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:56.587 01:40:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:11:56.587 01:40:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:11:56.587 01:40:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:11:56.587 01:40:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:11:58.484 01:40:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:11:58.484 01:40:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:11:58.484 01:40:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:11:58.484 01:40:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:11:58.484 01:40:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:11:58.484 01:40:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:11:58.484 01:40:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:58.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.742 01:40:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:58.742 01:40:22 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1216 -- # local i=0 00:11:58.742 01:40:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:11:58.742 01:40:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:58.742 01:40:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:11:58.742 01:40:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:58.742 01:40:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:11:58.742 01:40:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:58.742 01:40:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:58.742 01:40:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.742 01:40:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:58.742 01:40:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:58.742 01:40:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:58.742 01:40:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.742 01:40:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:58.742 01:40:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:58.742 01:40:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:58.742 01:40:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:58.742 01:40:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.742 01:40:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:58.742 01:40:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:58.743 01:40:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:58.743 01:40:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.743 [2024-05-15 01:40:22.551333] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:58.743 01:40:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:58.743 01:40:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:58.743 01:40:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:58.743 01:40:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.743 01:40:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:58.743 01:40:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:58.743 01:40:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:58.743 01:40:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.743 01:40:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:58.743 01:40:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:59.307 01:40:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial 
SPDKISFASTANDAWESOME 00:11:59.307 01:40:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:11:59.307 01:40:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:11:59.307 01:40:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:11:59.307 01:40:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:12:01.203 01:40:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:12:01.203 01:40:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:12:01.203 01:40:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:12:01.461 01:40:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:12:01.461 01:40:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:12:01.461 01:40:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:12:01.461 01:40:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:01.461 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.461 01:40:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:01.461 01:40:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:12:01.461 01:40:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:12:01.461 01:40:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:01.461 01:40:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:12:01.461 01:40:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:01.461 01:40:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:12:01.461 01:40:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:01.461 01:40:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:01.461 01:40:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.461 01:40:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:01.461 01:40:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:01.461 01:40:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:01.461 01:40:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.461 01:40:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:01.461 01:40:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:01.461 01:40:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:01.461 01:40:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:01.461 01:40:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.461 01:40:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:01.461 01:40:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:01.461 01:40:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:01.461 01:40:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.461 
[2024-05-15 01:40:25.316648] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:01.461 01:40:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:01.461 01:40:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:01.461 01:40:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:01.461 01:40:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.461 01:40:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:01.461 01:40:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:01.461 01:40:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:01.461 01:40:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.461 01:40:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:01.462 01:40:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:02.027 01:40:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:02.027 01:40:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:12:02.027 01:40:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:12:02.027 01:40:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:12:02.027 01:40:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:12:04.556 01:40:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:12:04.556 01:40:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:12:04.556 01:40:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:12:04.556 01:40:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:12:04.556 01:40:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:12:04.556 01:40:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:12:04.556 01:40:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:04.556 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.556 [2024-05-15 01:40:28.093875] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 
-- # xtrace_disable 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.556 [2024-05-15 01:40:28.141925] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.556 [2024-05-15 01:40:28.190081] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:04.556 
01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.556 [2024-05-15 01:40:28.238276] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:04.556 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:04.557 01:40:28 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.557 [2024-05-15 01:40:28.286440] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
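[editor's note] The five near-identical blocks above are iterations of the create/teardown loop at target/rpc.sh@99-107; each "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice marks one pass. Reconstructed from the xtrace, the loop body reduces to the sketch below (rpc_cmd is the suite's wrapper around scripts/rpc.py; the nqn, serial number, and listener address are the values visible in the trace):

  loops=5                            # rpc.sh@99 runs: seq 1 5
  for i in $(seq 1 $loops); do
      # create the subsystem, expose it over TCP, attach a namespace
      rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
      rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
      rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
      rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
      # tear it back down: drop nsid 1, then delete the subsystem
      rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done

The nvmf_get_stats call that follows verifies the target survived all five cycles.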
00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:04.557 "tick_rate": 2700000000, 00:12:04.557 "poll_groups": [ 00:12:04.557 { 00:12:04.557 "name": "nvmf_tgt_poll_group_000", 00:12:04.557 "admin_qpairs": 2, 00:12:04.557 "io_qpairs": 84, 00:12:04.557 "current_admin_qpairs": 0, 00:12:04.557 "current_io_qpairs": 0, 00:12:04.557 "pending_bdev_io": 0, 00:12:04.557 "completed_nvme_io": 184, 00:12:04.557 "transports": [ 00:12:04.557 { 00:12:04.557 "trtype": "TCP" 00:12:04.557 } 00:12:04.557 ] 00:12:04.557 }, 00:12:04.557 { 00:12:04.557 "name": "nvmf_tgt_poll_group_001", 00:12:04.557 "admin_qpairs": 2, 00:12:04.557 "io_qpairs": 84, 00:12:04.557 "current_admin_qpairs": 0, 00:12:04.557 "current_io_qpairs": 0, 00:12:04.557 "pending_bdev_io": 0, 00:12:04.557 "completed_nvme_io": 159, 00:12:04.557 "transports": [ 00:12:04.557 { 00:12:04.557 "trtype": "TCP" 00:12:04.557 } 00:12:04.557 ] 00:12:04.557 }, 00:12:04.557 { 00:12:04.557 "name": "nvmf_tgt_poll_group_002", 00:12:04.557 "admin_qpairs": 1, 00:12:04.557 "io_qpairs": 84, 00:12:04.557 "current_admin_qpairs": 0, 00:12:04.557 "current_io_qpairs": 0, 00:12:04.557 "pending_bdev_io": 0, 00:12:04.557 "completed_nvme_io": 136, 00:12:04.557 "transports": [ 00:12:04.557 { 00:12:04.557 "trtype": "TCP" 00:12:04.557 } 00:12:04.557 ] 00:12:04.557 }, 00:12:04.557 { 00:12:04.557 "name": "nvmf_tgt_poll_group_003", 00:12:04.557 "admin_qpairs": 2, 00:12:04.557 "io_qpairs": 84, 00:12:04.557 "current_admin_qpairs": 0, 00:12:04.557 "current_io_qpairs": 0, 00:12:04.557 "pending_bdev_io": 0, 00:12:04.557 "completed_nvme_io": 207, 00:12:04.557 "transports": [ 00:12:04.557 { 00:12:04.557 "trtype": "TCP" 00:12:04.557 } 00:12:04.557 ] 00:12:04.557 } 00:12:04.557 ] 00:12:04.557 }' 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:04.557 rmmod nvme_tcp 00:12:04.557 rmmod nvme_fabrics 00:12:04.557 rmmod nvme_keyring 00:12:04.557 
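[editor's note] The rpc.sh@112-113 checks above sum per-poll-group counters out of the captured nvmf_get_stats JSON with the jsum helper (rpc.sh@19-20). Only the jq and awk stages appear in the xtrace; a minimal sketch, assuming the helper reads the $stats JSON captured at rpc.sh@110 via a herestring:

  jsum() {
      local filter=$1
      # sum the numeric values the jq filter selects from the captured stats
      jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
  }

From the stats printed above: jsum '.poll_groups[].admin_qpairs' yields 7 (2+2+1+2), and jsum '.poll_groups[].io_qpairs' yields 336 (4 groups x 84), matching the (( 7 > 0 )) and (( 336 > 0 )) assertions in the trace.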
01:40:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 3976914 ']' 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 3976914 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@947 -- # '[' -z 3976914 ']' 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # kill -0 3976914 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # uname 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3976914 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3976914' 00:12:04.557 killing process with pid 3976914 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # kill 3976914 00:12:04.557 [2024-05-15 01:40:28.470067] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:04.557 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@971 -- # wait 3976914 00:12:04.816 01:40:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:04.816 01:40:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:04.816 01:40:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:04.816 01:40:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:04.816 01:40:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:04.816 01:40:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.816 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:04.816 01:40:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.358 01:40:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:07.358 00:12:07.358 real 0m25.285s 00:12:07.358 user 1m20.624s 00:12:07.358 sys 0m4.149s 00:12:07.358 01:40:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:07.358 01:40:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.358 ************************************ 00:12:07.358 END TEST nvmf_rpc 00:12:07.358 ************************************ 00:12:07.358 01:40:30 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:07.358 01:40:30 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:12:07.358 01:40:30 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:07.358 01:40:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:07.358 ************************************ 00:12:07.358 START TEST nvmf_invalid 00:12:07.358 ************************************ 00:12:07.358 01:40:30 nvmf_tcp.nvmf_invalid -- 
common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:07.358 * Looking for test storage... 00:12:07.358 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:07.358 01:40:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:07.358 01:40:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:07.358 01:40:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:07.358 01:40:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:07.358 01:40:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:07.358 01:40:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:07.358 01:40:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:07.358 01:40:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:07.359 01:40:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:07.359 01:40:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:07.359 01:40:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:07.359 01:40:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:07.359 01:40:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:07.359 01:40:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:07.359 01:40:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:07.359 01:40:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:07.359 01:40:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:07.359 01:40:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:07.359 01:40:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:07.359 01:40:30 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:07.359 01:40:30 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:07.359 01:40:30 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:07.359 01:40:30 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.359 01:40:30 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.359 01:40:30 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.359 01:40:30 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:07.359 01:40:30 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.359 01:40:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:12:07.359 01:40:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:07.359 01:40:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:07.359 01:40:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:07.359 01:40:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:07.359 01:40:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:07.359 01:40:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:07.359 01:40:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:07.359 01:40:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:07.359 01:40:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:07.359 01:40:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:07.359 01:40:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:07.359 01:40:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:07.359 01:40:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:07.359 01:40:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:07.359 01:40:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:07.359 01:40:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:07.359 01:40:30 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:12:07.359 01:40:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:07.359 01:40:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:07.359 01:40:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.359 01:40:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:07.359 01:40:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.359 01:40:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:07.359 01:40:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:07.359 01:40:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:12:07.359 01:40:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:09.893 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:09.893 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:09.893 Found net devices under 0000:09:00.0: cvl_0_0 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:09.893 Found net devices under 0000:09:00.1: cvl_0_1 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:09.893 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:09.893 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:12:09.893 00:12:09.893 --- 10.0.0.2 ping statistics --- 00:12:09.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.893 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:09.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:09.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:12:09.893 00:12:09.893 --- 10.0.0.1 ping statistics --- 00:12:09.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.893 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:09.893 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:09.894 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:09.894 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:09.894 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:09.894 01:40:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:09.894 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:09.894 01:40:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@721 -- # xtrace_disable 00:12:09.894 01:40:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:09.894 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=3981808 00:12:09.894 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:09.894 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 3981808 00:12:09.894 01:40:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@828 -- # '[' -z 3981808 ']' 00:12:09.894 01:40:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.894 01:40:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local max_retries=100 00:12:09.894 01:40:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.894 01:40:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@837 -- # xtrace_disable 00:12:09.894 01:40:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:09.894 [2024-05-15 01:40:33.633512] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
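[editor's note] Before nvmf_tgt starts, nvmf_tcp_init (nvmf/common.sh@229-268, a few records above) splits the two cvl interfaces across network namespaces so target and initiator get distinct TCP stacks on one host; both directions are then verified with ping. Collected from the trace (interface and namespace names as logged):

  # target side lives in its own namespace; initiator stays in the root ns
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # root ns -> target (0.320 ms above)
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator (0.096 ms)

This is why the nvmf_tgt process in the record above is launched through "ip netns exec cvl_0_0_ns_spdk": it must listen on 10.0.0.2 inside the target namespace.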
00:12:09.894 [2024-05-15 01:40:33.633600] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:09.894 EAL: No free 2048 kB hugepages reported on node 1 00:12:09.894 [2024-05-15 01:40:33.716186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:09.894 [2024-05-15 01:40:33.809129] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:09.894 [2024-05-15 01:40:33.809180] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:09.894 [2024-05-15 01:40:33.809204] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:09.894 [2024-05-15 01:40:33.809222] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:09.894 [2024-05-15 01:40:33.809241] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:09.894 [2024-05-15 01:40:33.809292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:09.894 [2024-05-15 01:40:33.809348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:09.894 [2024-05-15 01:40:33.809415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:09.894 [2024-05-15 01:40:33.809418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.238 01:40:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:10.238 01:40:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@861 -- # return 0 00:12:10.238 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:10.238 01:40:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@727 -- # xtrace_disable 00:12:10.238 01:40:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:10.238 01:40:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:10.238 01:40:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:10.238 01:40:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode17004 00:12:10.494 [2024-05-15 01:40:34.177672] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:10.494 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:10.494 { 00:12:10.494 "nqn": "nqn.2016-06.io.spdk:cnode17004", 00:12:10.494 "tgt_name": "foobar", 00:12:10.494 "method": "nvmf_create_subsystem", 00:12:10.494 "req_id": 1 00:12:10.494 } 00:12:10.494 Got JSON-RPC error response 00:12:10.494 response: 00:12:10.494 { 00:12:10.494 "code": -32603, 00:12:10.494 "message": "Unable to find target foobar" 00:12:10.494 }' 00:12:10.494 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:10.494 { 00:12:10.494 "nqn": "nqn.2016-06.io.spdk:cnode17004", 00:12:10.494 "tgt_name": "foobar", 00:12:10.494 "method": "nvmf_create_subsystem", 00:12:10.494 "req_id": 1 00:12:10.494 } 00:12:10.494 Got JSON-RPC error response 00:12:10.494 response: 00:12:10.494 { 00:12:10.494 "code": -32603, 00:12:10.494 "message": "Unable to find target foobar" 00:12:10.494 } == *\U\n\a\b\l\e\ 
\t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:10.494 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:10.494 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode27591 00:12:10.750 [2024-05-15 01:40:34.474709] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27591: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:10.750 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:10.750 { 00:12:10.750 "nqn": "nqn.2016-06.io.spdk:cnode27591", 00:12:10.750 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:10.750 "method": "nvmf_create_subsystem", 00:12:10.750 "req_id": 1 00:12:10.750 } 00:12:10.750 Got JSON-RPC error response 00:12:10.750 response: 00:12:10.750 { 00:12:10.750 "code": -32602, 00:12:10.750 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:10.750 }' 00:12:10.750 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:10.751 { 00:12:10.751 "nqn": "nqn.2016-06.io.spdk:cnode27591", 00:12:10.751 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:10.751 "method": "nvmf_create_subsystem", 00:12:10.751 "req_id": 1 00:12:10.751 } 00:12:10.751 Got JSON-RPC error response 00:12:10.751 response: 00:12:10.751 { 00:12:10.751 "code": -32602, 00:12:10.751 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:10.751 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:10.751 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:10.751 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode32378 00:12:11.008 [2024-05-15 01:40:34.771674] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32378: invalid model number 'SPDK_Controller' 00:12:11.008 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:11.008 { 00:12:11.008 "nqn": "nqn.2016-06.io.spdk:cnode32378", 00:12:11.008 "model_number": "SPDK_Controller\u001f", 00:12:11.008 "method": "nvmf_create_subsystem", 00:12:11.008 "req_id": 1 00:12:11.008 } 00:12:11.008 Got JSON-RPC error response 00:12:11.008 response: 00:12:11.008 { 00:12:11.008 "code": -32602, 00:12:11.008 "message": "Invalid MN SPDK_Controller\u001f" 00:12:11.008 }' 00:12:11.008 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:11.008 { 00:12:11.008 "nqn": "nqn.2016-06.io.spdk:cnode32378", 00:12:11.008 "model_number": "SPDK_Controller\u001f", 00:12:11.008 "method": "nvmf_create_subsystem", 00:12:11.008 "req_id": 1 00:12:11.008 } 00:12:11.008 Got JSON-RPC error response 00:12:11.008 response: 00:12:11.008 { 00:12:11.008 "code": -32602, 00:12:11.008 "message": "Invalid MN SPDK_Controller\u001f" 00:12:11.008 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:11.008 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:11.008 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:11.008 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' 
'90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:11.008 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:11.008 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:11.008 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:11.008 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.008 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:12:11.008 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:11.008 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:12:11.008 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.008 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.008 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:11.008 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:11.008 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:11.008 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.008 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.008 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:11.008 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:11.008 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:12:11.008 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.008 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.008 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:11.008 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:11.008 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ k == \- ]] 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'kV.]0@?Cw|1*7XFE+H\M`' 00:12:11.009 01:40:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'kV.]0@?Cw|1*7XFE+H\M`' nqn.2016-06.io.spdk:cnode15325 00:12:11.266 [2024-05-15 01:40:35.132931] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15325: invalid serial number 'kV.]0@?Cw|1*7XFE+H\M`' 00:12:11.266 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:11.266 { 00:12:11.266 "nqn": "nqn.2016-06.io.spdk:cnode15325", 00:12:11.266 "serial_number": "kV.]0@?Cw|1*7XFE+H\\M`", 00:12:11.266 "method": "nvmf_create_subsystem", 00:12:11.266 
"req_id": 1 00:12:11.266 } 00:12:11.266 Got JSON-RPC error response 00:12:11.266 response: 00:12:11.266 { 00:12:11.266 "code": -32602, 00:12:11.267 "message": "Invalid SN kV.]0@?Cw|1*7XFE+H\\M`" 00:12:11.267 }' 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:11.267 { 00:12:11.267 "nqn": "nqn.2016-06.io.spdk:cnode15325", 00:12:11.267 "serial_number": "kV.]0@?Cw|1*7XFE+H\\M`", 00:12:11.267 "method": "nvmf_create_subsystem", 00:12:11.267 "req_id": 1 00:12:11.267 } 00:12:11.267 Got JSON-RPC error response 00:12:11.267 response: 00:12:11.267 { 00:12:11.267 "code": -32602, 00:12:11.267 "message": "Invalid SN kV.]0@?Cw|1*7XFE+H\\M`" 00:12:11.267 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll < length )) 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:11.267 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:11.524 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.524 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.524 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:11.524 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:11.524 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:11.524 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.524 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.524 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:12:11.524 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:11.524 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:12:11.524 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.524 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.524 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:11.524 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:11.524 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
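The repetitive trace running through this stretch is gen_random_s from target/invalid.sh building a 41-character string one byte at a time: pick an ASCII code from the chars array (32 through 127), render it with printf %x, and append the glyph with echo -e. A condensed sketch of the same technique, assuming Bash's RANDOM as the entropy source (the script walks its pre-seeded chars array instead):

    # Build a random printable-ASCII string; codes 32..126 here, while the
    # suite's chars array deliberately includes 127 (DEL) as well.
    gen_random_string() {
        local length=$1 string='' code ll
        for ((ll = 0; ll < length; ll++)); do
            code=$((32 + RANDOM % 95))
            string+=$(printf "\\x$(printf '%x' "$code")")   # hex code -> glyph
        done
        printf '%s\n' "$string"
    }

    gen_random_string 41    # e.g. an over-long model number, as traced here

The per-character xtrace output continues below until ll reaches length.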
00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ O == \- 
]] 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'OIgg7Ogm!2M;U/2K.w3y\4<~]X>xQ2Qsld[f4pt>' 00:12:11.525 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'OIgg7Ogm!2M;U/2K.w3y\4<~]X>xQ2Qsld[f4pt>' nqn.2016-06.io.spdk:cnode22151 00:12:11.782 [2024-05-15 01:40:35.510156] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22151: invalid model number 'OIgg7Ogm!2M;U/2K.w3y\4<~]X>xQ2Qsld[f4pt>' 00:12:11.782 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:11.782 { 00:12:11.782 "nqn": "nqn.2016-06.io.spdk:cnode22151", 00:12:11.782 "model_number": "OIgg7Ogm!2M;U/2K.w3y\\4<~]X>xQ2Qsld[f4p\u007ft>", 00:12:11.782 "method": "nvmf_create_subsystem", 00:12:11.782 "req_id": 1 00:12:11.782 } 00:12:11.782 Got JSON-RPC error response 00:12:11.782 response: 00:12:11.782 { 00:12:11.782 "code": -32602, 00:12:11.782 "message": "Invalid MN OIgg7Ogm!2M;U/2K.w3y\\4<~]X>xQ2Qsld[f4p\u007ft>" 00:12:11.782 }' 00:12:11.782 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:11.782 { 00:12:11.782 "nqn": "nqn.2016-06.io.spdk:cnode22151", 00:12:11.782 "model_number": "OIgg7Ogm!2M;U/2K.w3y\\4<~]X>xQ2Qsld[f4p\u007ft>", 00:12:11.783 "method": "nvmf_create_subsystem", 00:12:11.783 "req_id": 1 00:12:11.783 } 00:12:11.783 Got JSON-RPC error response 00:12:11.783 response: 00:12:11.783 { 00:12:11.783 "code": -32602, 00:12:11.783 "message": "Invalid MN OIgg7Ogm!2M;U/2K.w3y\\4<~]X>xQ2Qsld[f4p\u007ft>" 00:12:11.783 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:11.783 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:12.039 [2024-05-15 01:40:35.759079] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:12.039 01:40:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:12.316 01:40:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:12.316 01:40:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:12.316 01:40:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:12.316 01:40:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:12.316 01:40:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:12.587 [2024-05-15 01:40:36.268750] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:12.588 [2024-05-15 01:40:36.268861] nvmf_rpc.c: 794:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:12.588 01:40:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:12.588 { 00:12:12.588 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:12.588 "listen_address": { 00:12:12.588 "trtype": "tcp", 00:12:12.588 "traddr": "", 00:12:12.588 "trsvcid": "4421" 00:12:12.588 }, 00:12:12.588 "method": "nvmf_subsystem_remove_listener", 00:12:12.588 "req_id": 1 00:12:12.588 } 00:12:12.588 Got JSON-RPC error response 00:12:12.588 response: 00:12:12.588 { 00:12:12.588 "code": -32602, 00:12:12.588 "message": 
"Invalid parameters" 00:12:12.588 }' 00:12:12.588 01:40:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:12.588 { 00:12:12.588 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:12.588 "listen_address": { 00:12:12.588 "trtype": "tcp", 00:12:12.588 "traddr": "", 00:12:12.588 "trsvcid": "4421" 00:12:12.588 }, 00:12:12.588 "method": "nvmf_subsystem_remove_listener", 00:12:12.588 "req_id": 1 00:12:12.588 } 00:12:12.588 Got JSON-RPC error response 00:12:12.588 response: 00:12:12.588 { 00:12:12.588 "code": -32602, 00:12:12.588 "message": "Invalid parameters" 00:12:12.588 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:12.588 01:40:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28766 -i 0 00:12:12.588 [2024-05-15 01:40:36.505548] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28766: invalid cntlid range [0-65519] 00:12:12.845 01:40:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:12.845 { 00:12:12.845 "nqn": "nqn.2016-06.io.spdk:cnode28766", 00:12:12.845 "min_cntlid": 0, 00:12:12.845 "method": "nvmf_create_subsystem", 00:12:12.845 "req_id": 1 00:12:12.845 } 00:12:12.845 Got JSON-RPC error response 00:12:12.845 response: 00:12:12.845 { 00:12:12.845 "code": -32602, 00:12:12.845 "message": "Invalid cntlid range [0-65519]" 00:12:12.845 }' 00:12:12.845 01:40:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:12.845 { 00:12:12.845 "nqn": "nqn.2016-06.io.spdk:cnode28766", 00:12:12.845 "min_cntlid": 0, 00:12:12.845 "method": "nvmf_create_subsystem", 00:12:12.845 "req_id": 1 00:12:12.845 } 00:12:12.845 Got JSON-RPC error response 00:12:12.845 response: 00:12:12.845 { 00:12:12.845 "code": -32602, 00:12:12.845 "message": "Invalid cntlid range [0-65519]" 00:12:12.845 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:12.845 01:40:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14218 -i 65520 00:12:12.845 [2024-05-15 01:40:36.750348] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14218: invalid cntlid range [65520-65519] 00:12:12.845 01:40:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:12.845 { 00:12:12.845 "nqn": "nqn.2016-06.io.spdk:cnode14218", 00:12:12.845 "min_cntlid": 65520, 00:12:12.845 "method": "nvmf_create_subsystem", 00:12:12.845 "req_id": 1 00:12:12.845 } 00:12:12.845 Got JSON-RPC error response 00:12:12.845 response: 00:12:12.845 { 00:12:12.845 "code": -32602, 00:12:12.845 "message": "Invalid cntlid range [65520-65519]" 00:12:12.845 }' 00:12:12.845 01:40:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:12.845 { 00:12:12.845 "nqn": "nqn.2016-06.io.spdk:cnode14218", 00:12:12.845 "min_cntlid": 65520, 00:12:12.845 "method": "nvmf_create_subsystem", 00:12:12.845 "req_id": 1 00:12:12.845 } 00:12:12.845 Got JSON-RPC error response 00:12:12.845 response: 00:12:12.845 { 00:12:12.845 "code": -32602, 00:12:12.845 "message": "Invalid cntlid range [65520-65519]" 00:12:12.845 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:12.845 01:40:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17078 -I 0 00:12:13.102 [2024-05-15 01:40:37.007225] 
nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17078: invalid cntlid range [1-0] 00:12:13.102 01:40:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:13.102 { 00:12:13.102 "nqn": "nqn.2016-06.io.spdk:cnode17078", 00:12:13.102 "max_cntlid": 0, 00:12:13.102 "method": "nvmf_create_subsystem", 00:12:13.102 "req_id": 1 00:12:13.102 } 00:12:13.102 Got JSON-RPC error response 00:12:13.102 response: 00:12:13.102 { 00:12:13.102 "code": -32602, 00:12:13.102 "message": "Invalid cntlid range [1-0]" 00:12:13.102 }' 00:12:13.103 01:40:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:13.103 { 00:12:13.103 "nqn": "nqn.2016-06.io.spdk:cnode17078", 00:12:13.103 "max_cntlid": 0, 00:12:13.103 "method": "nvmf_create_subsystem", 00:12:13.103 "req_id": 1 00:12:13.103 } 00:12:13.103 Got JSON-RPC error response 00:12:13.103 response: 00:12:13.103 { 00:12:13.103 "code": -32602, 00:12:13.103 "message": "Invalid cntlid range [1-0]" 00:12:13.103 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:13.103 01:40:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14156 -I 65520 00:12:13.360 [2024-05-15 01:40:37.256011] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14156: invalid cntlid range [1-65520] 00:12:13.360 01:40:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:13.360 { 00:12:13.360 "nqn": "nqn.2016-06.io.spdk:cnode14156", 00:12:13.360 "max_cntlid": 65520, 00:12:13.360 "method": "nvmf_create_subsystem", 00:12:13.360 "req_id": 1 00:12:13.360 } 00:12:13.360 Got JSON-RPC error response 00:12:13.360 response: 00:12:13.360 { 00:12:13.360 "code": -32602, 00:12:13.360 "message": "Invalid cntlid range [1-65520]" 00:12:13.360 }' 00:12:13.360 01:40:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:12:13.360 { 00:12:13.360 "nqn": "nqn.2016-06.io.spdk:cnode14156", 00:12:13.360 "max_cntlid": 65520, 00:12:13.360 "method": "nvmf_create_subsystem", 00:12:13.360 "req_id": 1 00:12:13.360 } 00:12:13.360 Got JSON-RPC error response 00:12:13.360 response: 00:12:13.360 { 00:12:13.360 "code": -32602, 00:12:13.360 "message": "Invalid cntlid range [1-65520]" 00:12:13.360 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:13.360 01:40:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10242 -i 6 -I 5 00:12:13.617 [2024-05-15 01:40:37.500848] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10242: invalid cntlid range [6-5] 00:12:13.617 01:40:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:13.617 { 00:12:13.617 "nqn": "nqn.2016-06.io.spdk:cnode10242", 00:12:13.617 "min_cntlid": 6, 00:12:13.617 "max_cntlid": 5, 00:12:13.617 "method": "nvmf_create_subsystem", 00:12:13.617 "req_id": 1 00:12:13.617 } 00:12:13.617 Got JSON-RPC error response 00:12:13.617 response: 00:12:13.617 { 00:12:13.617 "code": -32602, 00:12:13.617 "message": "Invalid cntlid range [6-5]" 00:12:13.617 }' 00:12:13.617 01:40:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:13.617 { 00:12:13.617 "nqn": "nqn.2016-06.io.spdk:cnode10242", 00:12:13.617 "min_cntlid": 6, 00:12:13.617 "max_cntlid": 5, 00:12:13.617 "method": "nvmf_create_subsystem", 00:12:13.617 "req_id": 1 00:12:13.617 } 
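Every cntlid case in this run follows one shape: call nvmf_create_subsystem with a range violating 1 <= min <= max <= 65519 (-i sets min_cntlid, -I sets max_cntlid), capture the JSON-RPC error, and glob-match the message; the backslash-heavy patterns in the trace are just the unquoted spelling of a quoted glob. A standalone sketch of one case, assuming a target is already listening on the default RPC socket:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # min_cntlid > max_cntlid must be rejected with code -32602.
    out=$("$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10242 \
        -i 6 -I 5 2>&1) || true
    [[ $out == *'Invalid cntlid range [6-5]'* ]] && echo 'rejected as expected'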
00:12:13.617 Got JSON-RPC error response 00:12:13.617 response: 00:12:13.617 { 00:12:13.617 "code": -32602, 00:12:13.617 "message": "Invalid cntlid range [6-5]" 00:12:13.617 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:13.617 01:40:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:13.875 01:40:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:13.875 { 00:12:13.875 "name": "foobar", 00:12:13.875 "method": "nvmf_delete_target", 00:12:13.875 "req_id": 1 00:12:13.875 } 00:12:13.875 Got JSON-RPC error response 00:12:13.875 response: 00:12:13.875 { 00:12:13.875 "code": -32602, 00:12:13.875 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:13.875 }' 00:12:13.875 01:40:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:13.875 { 00:12:13.875 "name": "foobar", 00:12:13.875 "method": "nvmf_delete_target", 00:12:13.875 "req_id": 1 00:12:13.875 } 00:12:13.875 Got JSON-RPC error response 00:12:13.875 response: 00:12:13.875 { 00:12:13.875 "code": -32602, 00:12:13.875 "message": "The specified target doesn't exist, cannot delete it." 00:12:13.875 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:13.875 01:40:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:13.875 01:40:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:13.875 01:40:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:13.875 01:40:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:12:13.875 01:40:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:13.875 01:40:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:12:13.875 01:40:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:13.875 01:40:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:13.875 rmmod nvme_tcp 00:12:13.875 rmmod nvme_fabrics 00:12:13.875 rmmod nvme_keyring 00:12:13.875 01:40:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:13.875 01:40:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:12:13.875 01:40:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:12:13.875 01:40:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 3981808 ']' 00:12:13.875 01:40:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 3981808 00:12:13.875 01:40:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@947 -- # '[' -z 3981808 ']' 00:12:13.875 01:40:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # kill -0 3981808 00:12:13.875 01:40:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # uname 00:12:13.875 01:40:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:12:13.875 01:40:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3981808 00:12:13.875 01:40:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:12:13.875 01:40:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:12:13.875 01:40:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3981808' 00:12:13.875 killing process with pid 3981808 00:12:13.875 01:40:37 nvmf_tcp.nvmf_invalid -- 
common/autotest_common.sh@966 -- # kill 3981808 00:12:13.875 [2024-05-15 01:40:37.712592] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:13.875 01:40:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@971 -- # wait 3981808 00:12:14.132 01:40:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:14.132 01:40:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:14.132 01:40:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:14.132 01:40:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:14.132 01:40:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:14.132 01:40:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.132 01:40:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:14.132 01:40:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:16.669 01:40:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:16.669 00:12:16.669 real 0m9.171s 00:12:16.669 user 0m20.323s 00:12:16.669 sys 0m2.795s 00:12:16.669 01:40:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:16.669 01:40:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:16.669 ************************************ 00:12:16.669 END TEST nvmf_invalid 00:12:16.669 ************************************ 00:12:16.669 01:40:40 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:16.669 01:40:40 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:12:16.669 01:40:40 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:16.669 01:40:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:16.669 ************************************ 00:12:16.669 START TEST nvmf_abort 00:12:16.669 ************************************ 00:12:16.669 01:40:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:16.669 * Looking for test storage... 
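Before abort.sh gets under way, note the teardown that just closed nvmf_invalid; every suite in this log repeats it. killprocess confirms the PID is alive and did not resolve to sudo before killing it, then the nvme modules are unloaded. A compressed sketch of that flow (PID from this run; wait works because nvmf_tgt is a child of the test shell):

    pid=3981808
    if [ -n "$pid" ] && kill -0 "$pid" 2>/dev/null; then
        name=$(ps --no-headers -o comm= "$pid")
        if [ "$name" != sudo ]; then      # refuse to kill a sudo wrapper
            echo "killing process with pid $pid"
            kill "$pid" && wait "$pid"
        fi
    fi
    modprobe -v -r nvme-tcp               # cascades to nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics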
00:12:16.669 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:16.669 01:40:40 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:16.669 01:40:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:12:16.669 01:40:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:16.669 01:40:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:16.669 01:40:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:16.669 01:40:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:16.669 01:40:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:16.669 01:40:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:16.669 01:40:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:16.669 01:40:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:16.669 01:40:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:16.669 01:40:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:16.669 01:40:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:16.669 01:40:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:16.669 01:40:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:16.669 01:40:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:16.669 01:40:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:16.669 01:40:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:16.669 01:40:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:16.669 01:40:40 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:16.670 01:40:40 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:16.670 01:40:40 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:16.670 01:40:40 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.670 01:40:40 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
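paths/export.sh, re-sourced by every suite, prepends the Go, protoc and golangci directories unconditionally, which is why the PATH above carries each of them several times over. Harmless, but a guard like this sketch (a suggested variant, not what the script does) would keep PATH idempotent:

    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;          # already present, leave PATH alone
            *) PATH=$1:$PATH ;;
        esac
    }
    path_prepend /opt/golangci/1.54.2/bin
    path_prepend /opt/protoc/21.7/bin
    path_prepend /opt/go/1.21.1/bin
    export PATH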
00:12:16.670 01:40:40 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.670 01:40:40 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:12:16.670 01:40:40 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.670 01:40:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:12:16.670 01:40:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:16.670 01:40:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:16.670 01:40:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:16.670 01:40:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:16.670 01:40:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:16.670 01:40:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:16.670 01:40:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:16.670 01:40:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:16.670 01:40:40 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:16.670 01:40:40 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:16.670 01:40:40 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:12:16.670 01:40:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:16.670 01:40:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:16.670 01:40:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:16.670 01:40:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:16.670 01:40:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:16.670 01:40:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.670 01:40:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:16.670 01:40:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:16.670 01:40:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:16.670 01:40:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:16.670 01:40:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:12:16.670 01:40:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:19.202 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:19.202 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:12:19.202 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:19.202 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:19.202 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:19.202 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:19.202 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:19.202 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:12:19.202 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:19.202 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:12:19.202 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:12:19.202 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:12:19.202 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:12:19.202 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:12:19.202 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:12:19.202 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:19.202 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:19.202 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:19.202 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:19.202 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:19.203 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:19.203 01:40:42 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:19.203 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:19.203 Found net devices under 0000:09:00.0: cvl_0_0 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:19.203 Found net devices under 0000:09:00.1: cvl_0_1 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- 
# NVMF_INITIATOR_IP=10.0.0.1 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:19.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:19.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:12:19.203 00:12:19.203 --- 10.0.0.2 ping statistics --- 00:12:19.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.203 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:19.203 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:19.203 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:12:19.203 00:12:19.203 --- 10.0.0.1 ping statistics --- 00:12:19.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.203 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@721 -- # xtrace_disable 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=3984742 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 3984742 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@828 -- # '[' -z 3984742 ']' 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local max_retries=100 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@837 -- # xtrace_disable 00:12:19.203 01:40:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:19.203 [2024-05-15 01:40:42.784920] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:12:19.203 [2024-05-15 01:40:42.785012] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:19.203 EAL: No free 2048 kB hugepages reported on node 1 00:12:19.203 [2024-05-15 01:40:42.864533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:19.203 [2024-05-15 01:40:42.956892] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:19.203 [2024-05-15 01:40:42.956955] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
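nvmf_tcp_init, traced just above, turns the two ports of one physical NIC into a self-contained TCP test bed: the target port moves into a private network namespace, each side gets a 10.0.0.x/24 address, port 4420 is opened through the firewall, and one ping in each direction proves the path before any NVMe traffic flows. The same plumbing, condensed from the trace (interface names as in this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target side
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

Running the target inside the namespace (the "ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt" prefix visible above) is what lets a single host exercise real NIC-to-NIC TCP instead of loopback.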
00:12:19.203 [2024-05-15 01:40:42.956983] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:19.203 [2024-05-15 01:40:42.956996] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:19.203 [2024-05-15 01:40:42.957008] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:19.203 [2024-05-15 01:40:42.957104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:19.203 [2024-05-15 01:40:42.957159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:19.203 [2024-05-15 01:40:42.957161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:19.203 01:40:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:19.203 01:40:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@861 -- # return 0 00:12:19.203 01:40:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:19.203 01:40:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@727 -- # xtrace_disable 00:12:19.203 01:40:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:19.203 01:40:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:19.203 01:40:43 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:12:19.203 01:40:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.203 01:40:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:19.203 [2024-05-15 01:40:43.111817] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:19.204 01:40:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.204 01:40:43 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:12:19.204 01:40:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.204 01:40:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:19.462 Malloc0 00:12:19.462 01:40:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.462 01:40:43 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:19.462 01:40:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.462 01:40:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:19.462 Delay0 00:12:19.462 01:40:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.462 01:40:43 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:19.462 01:40:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.462 01:40:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:19.462 01:40:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.462 01:40:43 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:12:19.462 01:40:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.462 01:40:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:19.462 01:40:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.462 01:40:43 
nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:19.462 01:40:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.462 01:40:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:19.462 [2024-05-15 01:40:43.185673] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:19.462 [2024-05-15 01:40:43.185990] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:19.462 01:40:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.462 01:40:43 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:19.462 01:40:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.462 01:40:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:19.462 01:40:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.462 01:40:43 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:12:19.462 EAL: No free 2048 kB hugepages reported on node 1 00:12:19.462 [2024-05-15 01:40:43.331330] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:21.988 Initializing NVMe Controllers 00:12:21.988 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:21.988 controller IO queue size 128 less than required 00:12:21.988 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:12:21.988 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:12:21.988 Initialization complete. Launching workers. 
00:12:21.988 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 31423 00:12:21.988 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 31484, failed to submit 62 00:12:21.988 success 31427, unsuccess 57, failed 0 00:12:21.988 01:40:45 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:21.988 01:40:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:21.988 01:40:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:21.988 01:40:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:21.988 01:40:45 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:21.988 01:40:45 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:12:21.988 01:40:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:21.988 01:40:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:12:21.988 01:40:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:21.988 01:40:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:12:21.988 01:40:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:21.988 01:40:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:21.988 rmmod nvme_tcp 00:12:21.988 rmmod nvme_fabrics 00:12:21.988 rmmod nvme_keyring 00:12:21.988 01:40:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:21.988 01:40:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:12:21.988 01:40:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:12:21.988 01:40:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 3984742 ']' 00:12:21.988 01:40:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 3984742 00:12:21.988 01:40:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@947 -- # '[' -z 3984742 ']' 00:12:21.988 01:40:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # kill -0 3984742 00:12:21.988 01:40:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # uname 00:12:21.988 01:40:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:12:21.988 01:40:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3984742 00:12:21.988 01:40:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:12:21.988 01:40:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:12:21.988 01:40:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3984742' 00:12:21.988 killing process with pid 3984742 00:12:21.988 01:40:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # kill 3984742 00:12:21.988 [2024-05-15 01:40:45.468622] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:21.988 01:40:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@971 -- # wait 3984742 00:12:21.988 01:40:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:21.988 01:40:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:21.988 01:40:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:21.988 01:40:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:21.988 
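A quick consistency check on the abort summary above (the reading of the counters is ours, not from the log): every read either completed normally or was failed by an abort, and one abort was attempted per read:

    success 31427 + unsuccess 57          = 31484 aborts submitted
    submitted 31484 + failed to submit 62 = 31546
    completed 123 + failed 31423          = 31546 reads issued

With Delay0 holding each read for roughly a second, nearly every in-flight read is still abortable when the abort command arrives, which is the point of the delay bdev in this test.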
01:40:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:21.988 01:40:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.988 01:40:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:21.988 01:40:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.894 01:40:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:23.894 00:12:23.894 real 0m7.736s 00:12:23.894 user 0m10.654s 00:12:23.894 sys 0m2.831s 00:12:23.894 01:40:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:23.894 01:40:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:23.894 ************************************ 00:12:23.894 END TEST nvmf_abort 00:12:23.894 ************************************ 00:12:23.894 01:40:47 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:23.894 01:40:47 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:12:23.894 01:40:47 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:23.894 01:40:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:23.894 ************************************ 00:12:23.894 START TEST nvmf_ns_hotplug_stress 00:12:23.894 ************************************ 00:12:23.894 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:24.152 * Looking for test storage... 00:12:24.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:24.152 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:24.152 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:12:24.152 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:24.152 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:24.152 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:24.152 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:24.152 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:24.152 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:24.152 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:24.152 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:24.152 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:24.152 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:24.152 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:24.152 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:24.152 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:24.152 
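nvmftestfini's teardown above reduces to a short sequence; a sketch (assuming $nvmfpid holds the target PID, 3984742 in this run):

    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    sync
    modprobe -v -r nvme-tcp        # cascades to rmmod nvme_tcp, nvme_fabrics, nvme_keyring above
    modprobe -v -r nvme-fabrics    # no-op if the cascade already removed it
    kill "$nvmfpid"                # stop nvmf_tgt, then wait for it to exit
    ip -4 addr flush cvl_0_1       # clear the initiator-side test address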
01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:24.152 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:24.152 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:24.152 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:24.152 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:24.152 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:24.152 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:24.152 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.152 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.152 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.152 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:12:24.153 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.153 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:12:24.153 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:24.153 
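The common.sh sourcing above derives the initiator's host identity from nvme-cli. A minimal equivalent (the parameter expansion is our guess at the mechanics; the values mirror the log):

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:<random-uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # keep just the trailing UUID
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")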
01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:24.153 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:24.153 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:24.153 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:24.153 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:24.153 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:24.153 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:24.153 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:24.153 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:12:24.153 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:24.153 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:24.153 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:24.153 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:24.153 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:24.153 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.153 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:24.153 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.153 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:24.153 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:24.153 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:12:24.153 01:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:12:26.683 01:40:50 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:26.683 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:26.683 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.683 
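gather_supported_nvmf_pci_devs above matches the PCI bus against a whitelist of Intel (e810/x722) and Mellanox device IDs; on this node it finds two Intel E810 functions (0x8086:0x159b, ice driver). A hypothetical manual equivalent, not from the script:

    lspci -D -d 8086:159b    # -> 0000:09:00.0 and 0000:09:00.1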
01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.683 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:26.684 Found net devices under 0000:09:00.0: cvl_0_0 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:26.684 Found net devices under 0000:09:00.1: cvl_0_1 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:26.684 
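Each matched function is then mapped to its kernel netdev through sysfs globbing, which is where the cvl_0_0/cvl_0_1 names come from. Roughly (our rendering of the pci_net_devs loop above):

    for pci in 0000:09:00.0 0000:09:00.1; do
        ls "/sys/bus/pci/devices/$pci/net/"    # -> cvl_0_0, cvl_0_1
    done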
01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:26.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:26.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:12:26.684 00:12:26.684 --- 10.0.0.2 ping statistics --- 00:12:26.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.684 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:26.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:26.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:12:26.684 00:12:26.684 --- 10.0.0.1 ping statistics --- 00:12:26.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.684 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@721 -- # xtrace_disable 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=3987253 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 3987253 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@828 -- # '[' -z 3987253 ']' 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local max_retries=100 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # xtrace_disable 00:12:26.684 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:26.684 [2024-05-15 01:40:50.442990] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:12:26.684 [2024-05-15 01:40:50.443081] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.684 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.684 [2024-05-15 01:40:50.529320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:26.943 [2024-05-15 01:40:50.614053] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
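nvmf_tcp_init above splits the two functions across network namespaces so one physical host can act as both target and initiator: cvl_0_0 (10.0.0.2) moves into cvl_0_0_ns_spdk for the target, cvl_0_1 (10.0.0.1) stays in the root namespace for the initiator, and both directions are ping-verified before nvmf_tgt is launched inside the namespace. Condensed from the xtrace (binary path shortened):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xE &   # target lives in the netns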
00:12:26.943 [2024-05-15 01:40:50.614104] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:26.943 [2024-05-15 01:40:50.614127] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:26.943 [2024-05-15 01:40:50.614139] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:26.943 [2024-05-15 01:40:50.614150] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:26.943 [2024-05-15 01:40:50.614256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:26.943 [2024-05-15 01:40:50.614311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:26.943 [2024-05-15 01:40:50.614314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:26.943 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:26.943 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@861 -- # return 0 00:12:26.943 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:26.943 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@727 -- # xtrace_disable 00:12:26.943 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:26.943 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:26.943 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:12:26.943 01:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:27.201 [2024-05-15 01:40:50.995699] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:27.201 01:40:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:27.457 01:40:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:27.714 [2024-05-15 01:40:51.482117] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:27.714 [2024-05-15 01:40:51.482425] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:27.714 01:40:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:27.971 01:40:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:12:28.229 Malloc0 00:12:28.229 01:40:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:28.486 Delay0 00:12:28.486 01:40:52 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:28.744 01:40:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:12:29.002 NULL1 00:12:29.002 01:40:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:29.259 01:40:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3987671 00:12:29.259 01:40:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:12:29.259 01:40:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987671 00:12:29.259 01:40:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:29.259 EAL: No free 2048 kB hugepages reported on node 1 00:12:30.639 Read completed with error (sct=0, sc=11) 00:12:30.639 01:40:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:30.639 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:30.639 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:30.639 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:30.639 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:30.639 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:30.639 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:30.639 01:40:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:12:30.640 01:40:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:12:30.897 true 00:12:30.897 01:40:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987671 00:12:30.897 01:40:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:31.829 01:40:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:32.087 01:40:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:12:32.087 01:40:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:12:32.344 true 00:12:32.344 01:40:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987671 00:12:32.344 01:40:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:32.601 01:40:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:32.859 01:40:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:12:32.859 01:40:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:12:33.116 true 00:12:33.116 01:40:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987671 00:12:33.116 01:40:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:33.373 01:40:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:33.662 01:40:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:12:33.662 01:40:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:12:33.662 true 00:12:33.919 01:40:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987671 00:12:33.919 01:40:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.850 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:34.850 01:40:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:34.850 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:34.850 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:35.108 01:40:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:12:35.108 01:40:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:12:35.366 true 00:12:35.366 01:40:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987671 00:12:35.366 01:40:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:35.623 01:40:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:35.881 01:40:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:12:35.881 01:40:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:12:36.138 true 00:12:36.138 01:40:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987671 
00:12:36.138 01:40:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:37.070 01:41:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:37.070 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:37.070 01:41:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:12:37.070 01:41:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:12:37.328 true 00:12:37.328 01:41:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987671 00:12:37.328 01:41:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:37.585 01:41:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:37.842 01:41:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:12:37.842 01:41:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:12:38.099 true 00:12:38.099 01:41:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987671 00:12:38.099 01:41:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:39.032 01:41:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:39.290 01:41:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:12:39.290 01:41:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:12:39.547 true 00:12:39.547 01:41:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987671 00:12:39.547 01:41:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:39.804 01:41:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:40.059 01:41:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:12:40.059 01:41:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:12:40.316 true 00:12:40.316 01:41:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987671 00:12:40.316 01:41:04 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:41.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:41.245 01:41:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:41.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:41.502 01:41:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:12:41.502 01:41:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:12:41.760 true 00:12:41.760 01:41:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987671 00:12:41.760 01:41:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.017 01:41:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:42.274 01:41:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:12:42.274 01:41:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:12:42.531 true 00:12:42.531 01:41:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987671 00:12:42.531 01:41:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.464 01:41:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:43.464 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:43.464 01:41:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:12:43.464 01:41:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:12:43.721 true 00:12:43.721 01:41:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987671 00:12:43.722 01:41:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.979 01:41:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:44.237 01:41:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:12:44.237 01:41:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:12:44.494 true 00:12:44.494 01:41:08 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 3987671 00:12:44.494 01:41:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:45.425 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:45.425 01:41:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:45.683 01:41:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:12:45.683 01:41:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:12:45.940 true 00:12:45.940 01:41:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987671 00:12:45.940 01:41:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:46.198 01:41:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:46.455 01:41:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:12:46.455 01:41:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:12:46.713 true 00:12:46.713 01:41:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987671 00:12:46.713 01:41:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:47.645 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:47.645 01:41:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:47.645 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:47.645 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:47.902 01:41:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:12:47.903 01:41:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:12:48.160 true 00:12:48.160 01:41:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987671 00:12:48.160 01:41:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:48.417 01:41:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:48.674 01:41:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:12:48.674 01:41:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:12:48.931 true 00:12:48.931 01:41:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987671 00:12:48.931 01:41:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.863 01:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:49.863 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:50.121 01:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:12:50.121 01:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:12:50.378 true 00:12:50.378 01:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987671 00:12:50.378 01:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:50.636 01:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:50.894 01:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:12:50.894 01:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:12:51.152 true 00:12:51.152 01:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987671 00:12:51.152 01:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.084 01:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:52.341 01:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:12:52.341 01:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:12:52.599 true 00:12:52.599 01:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987671 00:12:52.599 01:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.856 01:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:53.113 01:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:12:53.114 01:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1022 00:12:53.114 true 00:12:53.114 01:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987671 00:12:53.114 01:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:54.047 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:54.047 01:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:54.047 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:54.305 01:41:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:12:54.305 01:41:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:12:54.563 true 00:12:54.563 01:41:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987671 00:12:54.563 01:41:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:54.820 01:41:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:55.077 01:41:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:12:55.077 01:41:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:12:55.335 true 00:12:55.335 01:41:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987671 00:12:55.335 01:41:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:56.266 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:56.266 01:41:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:56.524 01:41:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:12:56.524 01:41:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:12:56.782 true 00:12:56.782 01:41:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987671 00:12:56.782 01:41:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.039 01:41:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:57.296 01:41:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:12:57.296 01:41:21 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:12:57.554 true 00:12:57.554 01:41:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987671 00:12:57.554 01:41:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.486 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:58.486 01:41:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:58.743 01:41:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:12:58.743 01:41:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:12:59.001 true 00:12:59.001 01:41:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987671 00:12:59.001 01:41:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:59.259 01:41:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:59.517 01:41:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:12:59.517 01:41:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:12:59.774 true 00:12:59.774 01:41:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987671 00:12:59.774 01:41:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.708 01:41:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:00.708 Initializing NVMe Controllers 00:13:00.708 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:00.708 Controller IO queue size 128, less than required. 00:13:00.708 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:00.708 Controller IO queue size 128, less than required. 00:13:00.708 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:00.708 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:00.708 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:00.708 Initialization complete. Launching workers. 
00:13:00.708 ========================================================
00:13:00.708                                                                             Latency(us)
00:13:00.708 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:13:00.708 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     815.61       0.40   87439.55    2694.02 1089449.01
00:13:00.708 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   11100.29       5.42   11532.36    3750.31  450102.65
00:13:00.708 ========================================================
00:13:00.708 Total                                                                  :   11915.90       5.82   16727.99    2694.02 1089449.01
00:13:00.708
00:13:00.708 01:41:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:13:00.708 01:41:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:13:00.966 true
00:13:00.966 01:41:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3987671
00:13:00.966 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3987671) - No such process
00:13:00.966 01:41:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3987671
00:13:00.966 01:41:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:01.224 01:41:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:13:01.482 01:41:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:13:01.482 01:41:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:13:01.482 01:41:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:13:01.482 01:41:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:01.482 01:41:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:13:01.739 null0
00:13:01.739 01:41:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:13:01.739 01:41:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:01.739 01:41:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:13:01.997 null1
00:13:01.997 01:41:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:13:01.997 01:41:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:01.997 01:41:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:13:02.254 null2
00:13:02.254 01:41:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:13:02.254 01:41:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:02.254 01:41:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:13:02.512 null3
00:13:02.512 01:41:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:13:02.512 01:41:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:02.512 01:41:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:13:02.769 null4
00:13:02.769 01:41:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:13:02.769 01:41:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:02.769 01:41:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:13:03.027 null5
00:13:03.027 01:41:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:13:03.027 01:41:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:03.027 01:41:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:13:03.027 null6
00:13:03.027 01:41:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:13:03.027 01:41:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:03.027 01:41:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:13:03.592 null7
00:13:03.592 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:13:03.592 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:03.592 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:13:03.592 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:13:03.592 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
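Eight copies of the same small worker now interleave in the trace (tags @14-@18), one per null bdev. Reconstructed as a sketch from those tags — only the names visible in the log are taken as given, and rpc.py again stands in for the full path:

    # add_remove worker implied by the @14/@16/@17/@18 trace lines:
    # ten add/remove cycles of one namespace ID backed by one null bdev.
    add_remove() {
        local nsid=$1 bdev=$2                        # @14: e.g. nsid=1 bdev=null0
        for ((i = 0; i < 10; i++)); do               # @16
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # @17
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # @18
        done
    }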
00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
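The @62-@64 tags above belong to the launcher that forks those workers into the background; the @66 wait a few lines below collects them by pid. Approximately (a sketch reconstructed from the trace; exact quoting in the original script is assumed):

    nthreads=8                                       # @58
    pids=()
    for ((i = 0; i < nthreads; i++)); do             # @59/@62
        add_remove $((i + 1)) "null$i" &             # @63: add_remove 1 null0 ... add_remove 8 null7
        pids+=($!)                                   # @64
    done
    wait "${pids[@]}"                                # @66: wait 3991724 3991725 ... 3991737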
00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3991724 3991725 3991727 3991729 3991731 3991733 3991735 3991737 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:03.593 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:03.851 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:03.851 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:03.851 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:03.851 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:03.851 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:03.851 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:03.851 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:04.109 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:04.109 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:04.109 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:04.109 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:04.109 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:04.109 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:13:04.109 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:04.109 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:04.109 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:04.109 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:04.109 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:04.109 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:04.109 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:04.109 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:04.109 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:04.109 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:04.109 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:04.109 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:04.109 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:04.109 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:04.109 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:04.109 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:04.109 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:04.109 01:41:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:04.367 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:04.367 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:04.367 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.367 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:04.368 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:04.368 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:04.368 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:04.368 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:04.626 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:04.626 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:04.626 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:04.626 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:04.626 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:04.626 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:04.626 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:04.626 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:04.626 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:04.626 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:04.626 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:04.626 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:04.626 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:04.626 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:04.626 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:04.626 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:04.626 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:04.626 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:04.626 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:04.626 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:04.626 01:41:28 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:04.626 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:04.626 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:04.626 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:04.884 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.884 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:04.884 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:04.884 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:04.884 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:04.884 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:04.884 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:04.884 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:05.142 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:05.142 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:05.142 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:05.142 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:05.142 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:05.142 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:05.142 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:05.142 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:05.142 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:05.142 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:05.142 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:05.142 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:05.142 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:05.142 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:05.142 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:05.142 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:05.142 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:05.142 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:05.142 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:05.142 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:05.142 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:05.142 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:05.142 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:05.142 01:41:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:05.399 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:05.399 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:05.399 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:05.399 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:05.399 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:05.399 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:05.399 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:05.399 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.656 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:05.656 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:05.656 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:05.656 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:05.656 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:05.656 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:05.656 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:05.656 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:05.656 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:05.656 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:05.656 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:05.656 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:05.656 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:05.656 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:05.656 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:05.656 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:05.656 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:05.656 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:05.656 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:05.656 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:05.656 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:05.656 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:05.656 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:05.656 
01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:05.914 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:05.914 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:05.914 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:05.914 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:05.914 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.914 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:05.914 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:05.914 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:06.172 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.172 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.172 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:06.172 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.172 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.172 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:06.172 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.172 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.172 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:06.172 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.173 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.173 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:06.173 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.173 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.173 01:41:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:06.173 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.173 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.173 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.173 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:06.173 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.173 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:06.173 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.173 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.173 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:06.431 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:06.431 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:06.431 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:06.431 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:06.431 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:06.431 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:06.431 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:06.431 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:06.689 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:13:06.689 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.689 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:06.689 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.689 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.689 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:06.689 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.689 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.689 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:06.689 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.689 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.689 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:06.689 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.689 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.689 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:06.689 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.689 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.689 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:06.689 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.689 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.689 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:06.689 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:06.689 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:06.690 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:06.948 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:06.948 
01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:06.948 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:06.948 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:06.948 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:06.948 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:06.948 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:06.948 01:41:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:07.206 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.206 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.206 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:07.206 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.206 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.206 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:07.206 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.206 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.206 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:07.206 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.206 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.206 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:07.206 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.206 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.206 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:07.206 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.206 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.206 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:07.206 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.206 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.206 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:07.206 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.206 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.206 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:07.525 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:07.525 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.525 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:07.525 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:07.525 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:07.525 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:07.525 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:07.525 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:07.783 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.783 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.783 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:07.783 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:13:07.783 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.783 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:07.783 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.783 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.783 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:07.783 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.783 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.783 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:07.783 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.783 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.783 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:07.783 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.783 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.783 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:07.783 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.783 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.783 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:07.783 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.783 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.783 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:08.041 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.041 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:08.041 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:08.041 
01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:08.041 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:08.041 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:08.041 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:08.041 01:41:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:08.299 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.299 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.299 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:08.299 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.299 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.299 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:08.299 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.299 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.299 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:08.299 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.299 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.299 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:08.299 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.299 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.299 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.299 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.299 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:08.299 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:08.299 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.299 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.299 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:08.299 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.299 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.299 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:08.557 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:08.557 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:08.557 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.557 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:08.557 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:08.557 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:08.557 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:08.557 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:08.815 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.815 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.815 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.815 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.815 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.815 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.815 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.815 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.815 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:13:08.815 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.815 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.815 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.815 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.815 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.815 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.815 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.815 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:08.815 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:13:08.815 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:08.815 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:13:09.073 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:09.073 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:13:09.073 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:09.073 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:09.073 rmmod nvme_tcp 00:13:09.073 rmmod nvme_fabrics 00:13:09.073 rmmod nvme_keyring 00:13:09.073 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:09.073 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:13:09.073 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:13:09.073 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 3987253 ']' 00:13:09.073 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 3987253 00:13:09.073 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@947 -- # '[' -z 3987253 ']' 00:13:09.073 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # kill -0 3987253 00:13:09.073 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # uname 00:13:09.073 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:09.073 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3987253 00:13:09.073 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:13:09.073 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:13:09.073 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3987253' 00:13:09.073 killing process with pid 3987253 00:13:09.073 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # kill 3987253 00:13:09.073 [2024-05-15 01:41:32.838527] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:09.073 01:41:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # wait 3987253 00:13:09.332 01:41:33 
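The teardown traced above is nvmftestfini: sync, unload nvme-tcp/nvme-fabrics/nvme-keyring on the initiator side, then stop the target with killprocess. Reconstructed from the autotest_common.sh@947-@971 guards above (a sketch, not the verbatim function):

    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1                           # @947: refuse an empty pid
        kill -0 "$pid" || return 1                          # @951: bail out if already gone
        if [[ $(uname) == Linux ]]; then                    # @952
            process_name=$(ps --no-headers -o comm= "$pid") # @953: reactor_1 in this run
        fi
        if [[ $process_name == sudo ]]; then                # @957: a sudo wrapper needs different handling
            kill -9 "$pid"                                  # assumption only; this branch is not hit here
        else
            echo "killing process with pid $pid"            # @965
            kill "$pid"                                     # @966
        fi
        wait "$pid" || true                                 # @971: reap and let shutdown logs flush
    }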
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:09.332 01:41:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:09.332 01:41:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:09.332 01:41:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:09.332 01:41:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:09.332 01:41:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.332 01:41:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:09.332 01:41:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.232 01:41:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:11.232 00:13:11.232 real 0m47.284s 00:13:11.232 user 3m33.634s 00:13:11.232 sys 0m16.610s 00:13:11.232 01:41:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:11.232 01:41:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:11.232 ************************************ 00:13:11.232 END TEST nvmf_ns_hotplug_stress 00:13:11.232 ************************************ 00:13:11.232 01:41:35 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:11.232 01:41:35 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:13:11.232 01:41:35 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:11.232 01:41:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:11.491 ************************************ 00:13:11.491 START TEST nvmf_connect_stress 00:13:11.491 ************************************ 00:13:11.491 01:41:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:11.491 * Looking for test storage... 
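Both stress tests in this section are dispatched through run_test, the autotest_common.sh wrapper that emits the START TEST/END TEST banners and the real/user/sys summary seen above for nvmf_ns_hotplug_stress. A simplified sketch (banner text taken from the log; the argument guard at @1098 traces as '[' 3 -le 1 ']', its exact purpose is inferred):

    run_test() {
        local test_name=$1
        shift
        if [[ $# -le 1 ]]; then # @1098: single-argument invocations take a different path (assumption)
            :
        fi
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"               # produces the real/user/sys lines in the log
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }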
00:13:11.491 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:11.491 01:41:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:11.491 01:41:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:11.491 01:41:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:11.491 01:41:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:11.491 01:41:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:11.491 01:41:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:11.491 01:41:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:11.491 01:41:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:11.491 01:41:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:11.491 01:41:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:11.491 01:41:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:11.491 01:41:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:11.491 01:41:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:11.491 01:41:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:11.491 01:41:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:11.491 01:41:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:11.491 01:41:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:11.491 01:41:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:11.491 01:41:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:11.491 01:41:35 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:11.491 01:41:35 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:11.491 01:41:35 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:11.491 01:41:35 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.491 01:41:35 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.491 01:41:35 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.491 01:41:35 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:11.491 01:41:35 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.491 01:41:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:13:11.491 01:41:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:11.491 01:41:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:11.491 01:41:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:11.491 01:41:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:11.491 01:41:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:11.491 01:41:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:11.491 01:41:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:11.491 01:41:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:11.491 01:41:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:11.491 01:41:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:11.492 01:41:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:11.492 01:41:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:11.492 01:41:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:11.492 01:41:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:11.492 01:41:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.492 01:41:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:13:11.492 01:41:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.492 01:41:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:11.492 01:41:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:11.492 01:41:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:11.492 01:41:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:14.023 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:14.023 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:14.023 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:14.023 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:14.023 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:14.023 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:14.023 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:14.023 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:14.023 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:14.023 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:13:14.023 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:14.023 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:13:14.023 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:14.023 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:14.023 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:14.023 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:14.024 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:14.024 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:14.024 Found net devices under 0000:09:00.0: cvl_0_0 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:14.024 01:41:37 
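prepare_net_devs matches the host's NICs against a table of Intel E810/X722 and Mellanox PCI IDs, then resolves each hit to its kernel netdev through sysfs; that is what prints the two 'Found 0000:09:00.x (0x8086 - 0x159b)' lines above (ice-driven E810 ports) and the cvl_0_0/cvl_0_1 names just after. Using the expansions traced at @383-@401, the sysfs step boils down to:

    net_devs=()
    for pci in 0000:09:00.0 0000:09:00.1; do              # the two E810 functions found above
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # netdev dirs the bound driver exposes
        [[ -e ${pci_net_devs[0]} ]] || continue           # no netdev -> not usable for TCP
        pci_net_devs=("${pci_net_devs[@]##*/}")           # keep only the names, e.g. cvl_0_0
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done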
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:14.024 Found net devices under 0000:09:00.1: cvl_0_1 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:14.024 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:14.024 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:14.024 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:13:14.024 00:13:14.024 --- 10.0.0.2 ping statistics --- 00:13:14.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:14.024 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:13:14.025 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:14.025 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:14.025 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:13:14.025 00:13:14.025 --- 10.0.0.1 ping statistics --- 00:13:14.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:14.025 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:13:14.025 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:14.025 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:13:14.025 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:14.025 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:14.025 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:14.025 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:14.025 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:14.025 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:14.025 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:14.025 01:41:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:14.025 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:14.025 01:41:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@721 -- # xtrace_disable 00:13:14.025 01:41:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:14.025 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=3994895 00:13:14.025 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:14.025 01:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 3994895 00:13:14.025 01:41:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@828 -- # '[' -z 3994895 ']' 00:13:14.025 01:41:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.025 01:41:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:14.025 01:41:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.025 01:41:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:14.025 01:41:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:14.025 [2024-05-15 01:41:37.869804] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
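nvmf_tcp_init (common.sh@229-@268, traced above) splits the two ports into a point-to-point 10.0.0.0/24 link: the target port cvl_0_0 moves into a private network namespace and the initiator keeps cvl_0_1, so NVMe/TCP traffic really crosses the wire, and the two pings verify both directions before any NVMe traffic starts. As a plain command sequence, taken from the trace:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                       # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator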
00:13:14.025 [2024-05-15 01:41:37.869891] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:14.025 EAL: No free 2048 kB hugepages reported on node 1 00:13:14.025 [2024-05-15 01:41:37.948584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:14.284 [2024-05-15 01:41:38.035453] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:14.284 [2024-05-15 01:41:38.035514] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:14.284 [2024-05-15 01:41:38.035541] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:14.284 [2024-05-15 01:41:38.035555] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:14.284 [2024-05-15 01:41:38.035568] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:14.284 [2024-05-15 01:41:38.035657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:14.284 [2024-05-15 01:41:38.035774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:14.284 [2024-05-15 01:41:38.035776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:14.284 01:41:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:14.284 01:41:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@861 -- # return 0 00:13:14.284 01:41:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:14.284 01:41:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@727 -- # xtrace_disable 00:13:14.284 01:41:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:14.284 01:41:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:14.284 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:14.284 01:41:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:14.284 01:41:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:14.284 [2024-05-15 01:41:38.184942] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:14.284 01:41:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:14.284 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:14.284 01:41:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:14.284 01:41:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:14.284 01:41:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:14.284 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:14.284 01:41:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:14.284 01:41:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:14.284 [2024-05-15 01:41:38.202260] nvmf_rpc.c: 615:decode_rpc_listen_address: 
*WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:14.542 [2024-05-15 01:41:38.220391] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:14.542 01:41:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:14.542 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:14.542 01:41:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:14.542 01:41:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:14.542 NULL1 00:13:14.542 01:41:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:14.542 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3994917 00:13:14.542 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:14.542 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:14.542 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:14.542 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:14.542 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.542 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:14.542 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.542 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:14.542 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.542 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:14.542 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.542 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:14.542 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.542 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:14.542 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.542 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:14.542 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.542 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:14.542 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.542 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:14.542 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.542 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:14.542 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.542 
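With the target app up (nvmfpid 3994895, listening on /var/tmp/spdk.sock), connect_stress.sh provisions it over RPC and launches the connect_stress initiator for a 10-second run. Collected from the @15-@21 traces above; rpc_cmd is the harness wrapper and $rootdir stands for the spdk checkout, neither defined in this snippet:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192          # transport flags exactly as traced
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                       # allow any host, up to 10 namespaces
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512                  # null bdev, 512 B blocks

    # stress the connect path from core 0 for 10 seconds, in the background (@20/@21)
    "$rootdir/test/nvme/connect_stress/connect_stress" -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &
    PERF_PID=$!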
01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:14.542 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.542 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:14.542 EAL: No free 2048 kB hugepages reported on node 1 00:13:14.542 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.542 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:14.542 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.542 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:14.542 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.542 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:14.542 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.543 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:14.543 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.543 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:14.543 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.543 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:14.543 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.543 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:14.543 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.543 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:14.543 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.543 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:14.543 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3994917 00:13:14.543 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:14.543 01:41:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:14.543 01:41:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:14.799 01:41:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:14.799 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3994917 00:13:14.799 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:14.799 01:41:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:14.799 01:41:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.055 01:41:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:15.055 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3994917 00:13:15.055 01:41:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:15.055 01:41:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:15.055 01:41:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.619 01:41:39 
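From here to the end of the test the log is one pattern repeated: @34 kill -0 $PERF_PID checks that connect_stress is still alive, then @35 rpc_cmd feeds the target the batch of twenty RPC requests that the @27/@28 loop above wrote into rpc.txt. The shape of that driver loop (the payload each cat appends at @28 is not visible in the trace, so bdev_get_bdevs below is only a placeholder):

    rpcs=$rootdir/test/nvmf/target/rpc.txt
    rm -f "$rpcs"

    for i in $(seq 1 20); do               # @27
        echo "bdev_get_bdevs" >> "$rpcs"   # @28 uses cat in the real script; request assumed
    done

    while kill -0 "$PERF_PID" 2> /dev/null; do   # @34: poll the perf process
        rpc_cmd < "$rpcs"                        # @35: replay the batch over the RPC socket
    done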
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:15.619 01:41:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3994917 00:13:15.619 01:41:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:15.619 01:41:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:15.619 01:41:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.877 01:41:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:15.877 01:41:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3994917 00:13:15.877 01:41:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:15.877 01:41:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:15.877 01:41:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:16.136 01:41:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:16.136 01:41:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3994917 00:13:16.136 01:41:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:16.136 01:41:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:16.136 01:41:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:16.393 01:41:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:16.393 01:41:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3994917 00:13:16.393 01:41:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:16.393 01:41:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:16.393 01:41:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:16.651 01:41:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:16.651 01:41:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3994917 00:13:16.651 01:41:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:16.651 01:41:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:16.651 01:41:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.215 01:41:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:17.215 01:41:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3994917 00:13:17.215 01:41:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.215 01:41:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:17.215 01:41:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.472 01:41:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:17.472 01:41:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3994917 00:13:17.472 01:41:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.472 01:41:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:17.472 01:41:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.730 01:41:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 
-- # [[ 0 == 0 ]] 00:13:17.730 01:41:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3994917 00:13:17.730 01:41:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.730 01:41:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:17.730 01:41:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.988 01:41:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:17.988 01:41:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3994917 00:13:17.988 01:41:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.988 01:41:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:17.988 01:41:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.257 01:41:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:18.257 01:41:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3994917 00:13:18.257 01:41:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:18.257 01:41:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:18.257 01:41:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.821 01:41:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:18.821 01:41:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3994917 00:13:18.821 01:41:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:18.821 01:41:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:18.821 01:41:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.078 01:41:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:19.078 01:41:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3994917 00:13:19.078 01:41:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.078 01:41:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:19.078 01:41:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.335 01:41:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:19.335 01:41:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3994917 00:13:19.335 01:41:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.335 01:41:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:19.335 01:41:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.592 01:41:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:19.592 01:41:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3994917 00:13:19.592 01:41:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.592 01:41:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:19.592 01:41:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.848 01:41:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:19.848 01:41:43 
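A detail worth noting in these repeated stanzas: rpc_cmd is not a bare rpc.py call but a common.sh wrapper, and the @560 xtrace_disable / @10 set +x pairs around every invocation are it muting the shell trace for the duration of the round trip before checking the status (@588 traces as [[ 0 == 0 ]] on success). A rough reconstruction of the control flow only; the in-tree version keeps a persistent rpc.py session, and xtrace_disable/xtrace_restore are harness helpers:

    rpc_cmd() {
        xtrace_disable                 # @560: silence set -x while talking RPC
        local rc=0
        "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock "$@" || rc=$?
        xtrace_restore
        [[ $rc == 0 ]]                 # @588: the status check seen as [[ 0 == 0 ]]
    }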
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3994917 00:13:19.848 01:41:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.848 01:41:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:19.848 01:41:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.413 01:41:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:20.413 01:41:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3994917 00:13:20.413 01:41:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.413 01:41:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:20.413 01:41:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.670 01:41:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:20.670 01:41:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3994917 00:13:20.670 01:41:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.670 01:41:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:20.670 01:41:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.927 01:41:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:20.927 01:41:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3994917 00:13:20.927 01:41:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.927 01:41:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:20.927 01:41:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.186 01:41:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:21.186 01:41:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3994917 00:13:21.186 01:41:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.186 01:41:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:21.186 01:41:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.443 01:41:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:21.443 01:41:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3994917 00:13:21.443 01:41:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.443 01:41:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:21.443 01:41:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.007 01:41:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:22.007 01:41:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3994917 00:13:22.007 01:41:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.007 01:41:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:22.007 01:41:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.265 01:41:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:22.265 01:41:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 
-- # kill -0 3994917 00:13:22.265 01:41:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.265 01:41:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:22.265 01:41:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.522 01:41:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:22.522 01:41:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3994917 00:13:22.522 01:41:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.522 01:41:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:22.522 01:41:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.780 01:41:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:22.780 01:41:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3994917 00:13:22.780 01:41:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.780 01:41:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:22.780 01:41:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.037 01:41:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:23.037 01:41:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3994917 00:13:23.037 01:41:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.037 01:41:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:23.037 01:41:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.604 01:41:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:23.604 01:41:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3994917 00:13:23.604 01:41:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.604 01:41:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:23.604 01:41:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.861 01:41:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:23.861 01:41:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3994917 00:13:23.861 01:41:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.861 01:41:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:23.861 01:41:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.118 01:41:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:24.118 01:41:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3994917 00:13:24.118 01:41:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.118 01:41:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:24.118 01:41:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.376 01:41:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:24.376 01:41:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3994917 00:13:24.376 01:41:48 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:13:24.376 01:41:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable
00:13:24.376 01:41:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:24.634 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:13:24.634 01:41:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:13:24.634 01:41:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3994917
00:13:24.634 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3994917) - No such process
00:13:24.634 01:41:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3994917
00:13:24.634 01:41:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:13:24.892 01:41:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:13:24.892 01:41:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini
00:13:24.892 01:41:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup
00:13:24.892 01:41:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync
00:13:24.892 01:41:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:13:24.892 01:41:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e
00:13:24.892 01:41:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20}
00:13:24.893 01:41:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:13:24.893 rmmod nvme_tcp
00:13:24.893 rmmod nvme_fabrics
00:13:24.893 rmmod nvme_keyring
00:13:24.893 01:41:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:13:24.893 01:41:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e
00:13:24.893 01:41:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0
00:13:24.893 01:41:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 3994895 ']'
00:13:24.893 01:41:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 3994895
00:13:24.893 01:41:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@947 -- # '[' -z 3994895 ']'
00:13:24.893 01:41:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # kill -0 3994895
00:13:24.893 01:41:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # uname
00:13:24.893 01:41:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:13:24.893 01:41:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3994895
00:13:24.893 01:41:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # process_name=reactor_1
00:13:24.893 01:41:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']'
00:13:24.893 01:41:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3994895'
00:13:24.893 killing process with pid 3994895
00:13:24.893 01:41:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # kill 3994895
00:13:24.893 [2024-05-15 01:41:48.628704] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:13:24.893 01:41:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@971 -- # wait 3994895
00:13:25.152 01:41:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:13:25.152 01:41:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:13:25.152 01:41:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:13:25.152 01:41:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:13:25.152 01:41:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:13:25.152 01:41:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:25.152 01:41:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:13:25.152 01:41:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:27.082 01:41:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:13:27.082
00:13:27.082 real 0m15.712s
00:13:27.082 user 0m38.424s
00:13:27.082 sys 0m6.184s
00:13:27.082 01:41:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # xtrace_disable
00:13:27.082 01:41:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:27.082 ************************************
00:13:27.082 END TEST nvmf_connect_stress
00:13:27.082 ************************************
00:13:27.082 01:41:50 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:13:27.082 01:41:50 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']'
00:13:27.082 01:41:50 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable
00:13:27.082 01:41:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:27.082 ************************************
00:13:27.082 START TEST nvmf_fused_ordering
00:13:27.082 ************************************
00:13:27.082 01:41:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:13:27.082 * Looking for test storage...
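The `kill -0` probe in the teardown above is a liveness check, not a kill: signal 0 delivers nothing and only performs the existence/permission check, so the harness reads "No such process" as "the stress workers already exited" before reaping them with `wait`. A minimal sketch of the idiom in plain bash (the `pid` variable is illustrative, not taken from the SPDK scripts):

    # Signal 0 sends nothing; the call only reports whether $pid still exists.
    if kill -0 "$pid" 2>/dev/null; then
        wait "$pid"                      # still alive: block until it exits
                                         # (wait only applies to children of this shell)
    else
        echo "pid $pid already exited"   # the 'kill: (3994917) - No such process' case above
    fi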
00:13:27.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:27.082 01:41:50 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:27.082 01:41:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:27.082 01:41:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:27.082 01:41:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:27.082 01:41:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:27.082 01:41:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:27.082 01:41:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:27.082 01:41:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:27.082 01:41:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:27.082 01:41:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:27.082 01:41:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:27.082 01:41:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:27.082 01:41:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:27.082 01:41:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:27.082 01:41:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:27.082 01:41:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:27.082 01:41:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:27.082 01:41:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:27.082 01:41:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:27.082 01:41:50 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:27.082 01:41:50 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:27.082 01:41:50 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:27.082 01:41:50 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.082 01:41:50 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.082 01:41:50 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.082 01:41:50 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:27.082 01:41:50 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.082 01:41:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:13:27.082 01:41:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:27.082 01:41:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:27.083 01:41:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:27.083 01:41:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:27.083 01:41:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:27.083 01:41:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:27.083 01:41:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:27.083 01:41:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:27.083 01:41:51 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:27.083 01:41:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:27.083 01:41:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:27.083 01:41:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:27.083 01:41:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:27.083 01:41:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:27.083 01:41:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.083 01:41:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null'
00:13:27.083 01:41:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:27.083 01:41:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:13:27.083 01:41:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:13:27.083 01:41:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable
00:13:27.083 01:41:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:13:29.610 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:13:29.610 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=()
00:13:29.610 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs
00:13:29.610 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=()
00:13:29.610 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:13:29.610 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=()
00:13:29.610 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=()
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=()
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=()
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=()
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)'
00:13:29.611 Found 0000:09:00.0 (0x8086 - 0x159b)
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)'
00:13:29.611 Found 0000:09:00.1 (0x8086 - 0x159b)
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]]
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0'
00:13:29.611 Found net devices under 0000:09:00.0: cvl_0_0
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]]
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1'
00:13:29.611 Found net devices under 0000:09:00.1: cvl_0_1
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:13:29.611 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:13:29.869 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:13:29.869 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:13:29.869 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:13:29.869 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:29.869 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms
00:13:29.869
00:13:29.869 --- 10.0.0.2 ping statistics ---
00:13:29.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:29.869 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms
00:13:29.869 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:29.869 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:29.869 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms
00:13:29.869
00:13:29.869 --- 10.0.0.1 ping statistics ---
00:13:29.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:29.869 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms
00:13:29.869 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:29.869 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0
00:13:29.869 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:13:29.869 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:29.869 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:13:29.869 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:13:29.869 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:29.869 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:13:29.869 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:13:29.869 01:41:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2
00:13:29.869 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:13:29.869 01:41:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@721 -- # xtrace_disable
00:13:29.869 01:41:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:13:29.869 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=3998476
00:13:29.869 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:13:29.869 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 3998476
00:13:29.869 01:41:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@828 -- # '[' -z 3998476 ']'
00:13:29.869 01:41:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:29.869 01:41:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local max_retries=100
00:13:29.869 01:41:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:29.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:29.869 01:41:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # xtrace_disable
00:13:29.869 01:41:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
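Everything from `ip netns add` through the two pings above is the point-to-point topology the rest of the test rides on: one port of the E810 pair (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk network namespace to act as the target side, the peer port (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator, and `nvmf_tgt` is launched under `ip netns exec` so it binds the target-side interface. Condensed from the trace (same commands and device names as this run; root privileges assumed):

    ip netns add cvl_0_0_ns_spdk                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                  # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator

The two sub-millisecond pings (0.186 ms and 0.072 ms) are the gate: if either direction failed, there would be no point starting the NVMe-oF target.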
00:13:29.869 [2024-05-15 01:41:53.658681] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization...
00:13:29.869 [2024-05-15 01:41:53.658761] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:29.869 EAL: No free 2048 kB hugepages reported on node 1
00:13:29.869 [2024-05-15 01:41:53.732962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:30.128 [2024-05-15 01:41:53.817454] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:30.128 [2024-05-15 01:41:53.817517] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:30.128 [2024-05-15 01:41:53.817530] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:13:30.128 [2024-05-15 01:41:53.817541] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:13:30.128 [2024-05-15 01:41:53.817551] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:13:30.128 [2024-05-15 01:41:53.817596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:13:30.128 01:41:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@857 -- # (( i == 0 ))
00:13:30.128 01:41:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@861 -- # return 0
00:13:30.128 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:13:30.128 01:41:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@727 -- # xtrace_disable
00:13:30.128 01:41:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:13:30.128 01:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:30.128 01:41:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:13:30.128 01:41:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable
00:13:30.128 01:41:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:13:30.128 [2024-05-15 01:41:53.955348] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:13:30.128 01:41:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:13:30.128 01:41:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:13:30.128 01:41:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable
00:13:30.128 01:41:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:13:30.128 01:41:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:13:30.128 01:41:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:13:30.128 01:41:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable
00:13:30.128 01:41:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:13:30.128 [2024-05-15 01:41:53.971316] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:13:30.128 [2024-05-15 01:41:53.971612] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:30.128 01:41:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:13:30.128 01:41:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:13:30.128 01:41:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable
00:13:30.128 01:41:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:13:30.128 NULL1
00:13:30.128 01:41:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:13:30.128 01:41:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine
00:13:30.128 01:41:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable
00:13:30.128 01:41:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:13:30.128 01:41:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:13:30.128 01:41:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:13:30.128 01:41:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable
00:13:30.128 01:41:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:13:30.128 01:41:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:13:30.128 01:41:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:13:30.128 [2024-05-15 01:41:54.015449] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization...
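The `rpc_cmd` calls above are the entire target-provisioning step: create the TCP transport, a subsystem capped at 10 namespaces, a listener on 10.0.0.2:4420, and a 1000 MiB null bdev exposed as namespace 1. `rpc_cmd` batches these over /var/tmp/spdk.sock; issued by hand with SPDK's own RPC client they would look roughly like this (flags exactly as in the trace above):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512      # 1000 MiB backed by nothing, 512-byte blocks
    $rpc bdev_wait_for_examine
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering helper then connects as an initiator using the transport-ID string visible in its command line (trtype, adrfam, traddr, trsvcid, subnqn), and the fused_ordering(0) through fused_ordering(1023) runs that follow are its numbered progress counters.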
00:13:30.128 [2024-05-15 01:41:54.015492] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3998501 ] 00:13:30.128 EAL: No free 2048 kB hugepages reported on node 1 00:13:30.695 Attached to nqn.2016-06.io.spdk:cnode1 00:13:30.695 Namespace ID: 1 size: 1GB 00:13:30.695 fused_ordering(0) 00:13:30.695 fused_ordering(1) 00:13:30.695 fused_ordering(2) 00:13:30.695 fused_ordering(3) 00:13:30.695 fused_ordering(4) 00:13:30.695 fused_ordering(5) 00:13:30.695 fused_ordering(6) 00:13:30.695 fused_ordering(7) 00:13:30.695 fused_ordering(8) 00:13:30.695 fused_ordering(9) 00:13:30.695 fused_ordering(10) 00:13:30.695 fused_ordering(11) 00:13:30.695 fused_ordering(12) 00:13:30.695 fused_ordering(13) 00:13:30.695 fused_ordering(14) 00:13:30.695 fused_ordering(15) 00:13:30.695 fused_ordering(16) 00:13:30.695 fused_ordering(17) 00:13:30.695 fused_ordering(18) 00:13:30.695 fused_ordering(19) 00:13:30.695 fused_ordering(20) 00:13:30.695 fused_ordering(21) 00:13:30.695 fused_ordering(22) 00:13:30.695 fused_ordering(23) 00:13:30.695 fused_ordering(24) 00:13:30.695 fused_ordering(25) 00:13:30.695 fused_ordering(26) 00:13:30.695 fused_ordering(27) 00:13:30.695 fused_ordering(28) 00:13:30.695 fused_ordering(29) 00:13:30.695 fused_ordering(30) 00:13:30.695 fused_ordering(31) 00:13:30.695 fused_ordering(32) 00:13:30.695 fused_ordering(33) 00:13:30.695 fused_ordering(34) 00:13:30.695 fused_ordering(35) 00:13:30.695 fused_ordering(36) 00:13:30.695 fused_ordering(37) 00:13:30.695 fused_ordering(38) 00:13:30.695 fused_ordering(39) 00:13:30.695 fused_ordering(40) 00:13:30.695 fused_ordering(41) 00:13:30.695 fused_ordering(42) 00:13:30.695 fused_ordering(43) 00:13:30.695 fused_ordering(44) 00:13:30.695 fused_ordering(45) 00:13:30.695 fused_ordering(46) 00:13:30.695 fused_ordering(47) 00:13:30.695 fused_ordering(48) 00:13:30.695 fused_ordering(49) 00:13:30.695 fused_ordering(50) 00:13:30.695 fused_ordering(51) 00:13:30.695 fused_ordering(52) 00:13:30.695 fused_ordering(53) 00:13:30.695 fused_ordering(54) 00:13:30.695 fused_ordering(55) 00:13:30.695 fused_ordering(56) 00:13:30.695 fused_ordering(57) 00:13:30.695 fused_ordering(58) 00:13:30.695 fused_ordering(59) 00:13:30.695 fused_ordering(60) 00:13:30.695 fused_ordering(61) 00:13:30.695 fused_ordering(62) 00:13:30.695 fused_ordering(63) 00:13:30.695 fused_ordering(64) 00:13:30.695 fused_ordering(65) 00:13:30.695 fused_ordering(66) 00:13:30.695 fused_ordering(67) 00:13:30.695 fused_ordering(68) 00:13:30.695 fused_ordering(69) 00:13:30.695 fused_ordering(70) 00:13:30.695 fused_ordering(71) 00:13:30.695 fused_ordering(72) 00:13:30.695 fused_ordering(73) 00:13:30.695 fused_ordering(74) 00:13:30.695 fused_ordering(75) 00:13:30.695 fused_ordering(76) 00:13:30.695 fused_ordering(77) 00:13:30.695 fused_ordering(78) 00:13:30.695 fused_ordering(79) 00:13:30.695 fused_ordering(80) 00:13:30.695 fused_ordering(81) 00:13:30.695 fused_ordering(82) 00:13:30.695 fused_ordering(83) 00:13:30.695 fused_ordering(84) 00:13:30.695 fused_ordering(85) 00:13:30.695 fused_ordering(86) 00:13:30.695 fused_ordering(87) 00:13:30.695 fused_ordering(88) 00:13:30.695 fused_ordering(89) 00:13:30.695 fused_ordering(90) 00:13:30.695 fused_ordering(91) 00:13:30.695 fused_ordering(92) 00:13:30.695 fused_ordering(93) 00:13:30.695 fused_ordering(94) 00:13:30.695 fused_ordering(95) 00:13:30.695 fused_ordering(96) 00:13:30.695 
fused_ordering(97) 00:13:30.695 fused_ordering(98) 00:13:30.695 fused_ordering(99) 00:13:30.696 fused_ordering(100) 00:13:30.696 fused_ordering(101) 00:13:30.696 fused_ordering(102) 00:13:30.696 fused_ordering(103) 00:13:30.696 fused_ordering(104) 00:13:30.696 fused_ordering(105) 00:13:30.696 fused_ordering(106) 00:13:30.696 fused_ordering(107) 00:13:30.696 fused_ordering(108) 00:13:30.696 fused_ordering(109) 00:13:30.696 fused_ordering(110) 00:13:30.696 fused_ordering(111) 00:13:30.696 fused_ordering(112) 00:13:30.696 fused_ordering(113) 00:13:30.696 fused_ordering(114) 00:13:30.696 fused_ordering(115) 00:13:30.696 fused_ordering(116) 00:13:30.696 fused_ordering(117) 00:13:30.696 fused_ordering(118) 00:13:30.696 fused_ordering(119) 00:13:30.696 fused_ordering(120) 00:13:30.696 fused_ordering(121) 00:13:30.696 fused_ordering(122) 00:13:30.696 fused_ordering(123) 00:13:30.696 fused_ordering(124) 00:13:30.696 fused_ordering(125) 00:13:30.696 fused_ordering(126) 00:13:30.696 fused_ordering(127) 00:13:30.696 fused_ordering(128) 00:13:30.696 fused_ordering(129) 00:13:30.696 fused_ordering(130) 00:13:30.696 fused_ordering(131) 00:13:30.696 fused_ordering(132) 00:13:30.696 fused_ordering(133) 00:13:30.696 fused_ordering(134) 00:13:30.696 fused_ordering(135) 00:13:30.696 fused_ordering(136) 00:13:30.696 fused_ordering(137) 00:13:30.696 fused_ordering(138) 00:13:30.696 fused_ordering(139) 00:13:30.696 fused_ordering(140) 00:13:30.696 fused_ordering(141) 00:13:30.696 fused_ordering(142) 00:13:30.696 fused_ordering(143) 00:13:30.696 fused_ordering(144) 00:13:30.696 fused_ordering(145) 00:13:30.696 fused_ordering(146) 00:13:30.696 fused_ordering(147) 00:13:30.696 fused_ordering(148) 00:13:30.696 fused_ordering(149) 00:13:30.696 fused_ordering(150) 00:13:30.696 fused_ordering(151) 00:13:30.696 fused_ordering(152) 00:13:30.696 fused_ordering(153) 00:13:30.696 fused_ordering(154) 00:13:30.696 fused_ordering(155) 00:13:30.696 fused_ordering(156) 00:13:30.696 fused_ordering(157) 00:13:30.696 fused_ordering(158) 00:13:30.696 fused_ordering(159) 00:13:30.696 fused_ordering(160) 00:13:30.696 fused_ordering(161) 00:13:30.696 fused_ordering(162) 00:13:30.696 fused_ordering(163) 00:13:30.696 fused_ordering(164) 00:13:30.696 fused_ordering(165) 00:13:30.696 fused_ordering(166) 00:13:30.696 fused_ordering(167) 00:13:30.696 fused_ordering(168) 00:13:30.696 fused_ordering(169) 00:13:30.696 fused_ordering(170) 00:13:30.696 fused_ordering(171) 00:13:30.696 fused_ordering(172) 00:13:30.696 fused_ordering(173) 00:13:30.696 fused_ordering(174) 00:13:30.696 fused_ordering(175) 00:13:30.696 fused_ordering(176) 00:13:30.696 fused_ordering(177) 00:13:30.696 fused_ordering(178) 00:13:30.696 fused_ordering(179) 00:13:30.696 fused_ordering(180) 00:13:30.696 fused_ordering(181) 00:13:30.696 fused_ordering(182) 00:13:30.696 fused_ordering(183) 00:13:30.696 fused_ordering(184) 00:13:30.696 fused_ordering(185) 00:13:30.696 fused_ordering(186) 00:13:30.696 fused_ordering(187) 00:13:30.696 fused_ordering(188) 00:13:30.696 fused_ordering(189) 00:13:30.696 fused_ordering(190) 00:13:30.696 fused_ordering(191) 00:13:30.696 fused_ordering(192) 00:13:30.696 fused_ordering(193) 00:13:30.696 fused_ordering(194) 00:13:30.696 fused_ordering(195) 00:13:30.696 fused_ordering(196) 00:13:30.696 fused_ordering(197) 00:13:30.696 fused_ordering(198) 00:13:30.696 fused_ordering(199) 00:13:30.696 fused_ordering(200) 00:13:30.696 fused_ordering(201) 00:13:30.696 fused_ordering(202) 00:13:30.696 fused_ordering(203) 00:13:30.696 fused_ordering(204) 
00:13:30.696 fused_ordering(205) 00:13:30.954 fused_ordering(206) 00:13:30.955 fused_ordering(207) 00:13:30.955 fused_ordering(208) 00:13:30.955 fused_ordering(209) 00:13:30.955 fused_ordering(210) 00:13:30.955 fused_ordering(211) 00:13:30.955 fused_ordering(212) 00:13:30.955 fused_ordering(213) 00:13:30.955 fused_ordering(214) 00:13:30.955 fused_ordering(215) 00:13:30.955 fused_ordering(216) 00:13:30.955 fused_ordering(217) 00:13:30.955 fused_ordering(218) 00:13:30.955 fused_ordering(219) 00:13:30.955 fused_ordering(220) 00:13:30.955 fused_ordering(221) 00:13:30.955 fused_ordering(222) 00:13:30.955 fused_ordering(223) 00:13:30.955 fused_ordering(224) 00:13:30.955 fused_ordering(225) 00:13:30.955 fused_ordering(226) 00:13:30.955 fused_ordering(227) 00:13:30.955 fused_ordering(228) 00:13:30.955 fused_ordering(229) 00:13:30.955 fused_ordering(230) 00:13:30.955 fused_ordering(231) 00:13:30.955 fused_ordering(232) 00:13:30.955 fused_ordering(233) 00:13:30.955 fused_ordering(234) 00:13:30.955 fused_ordering(235) 00:13:30.955 fused_ordering(236) 00:13:30.955 fused_ordering(237) 00:13:30.955 fused_ordering(238) 00:13:30.955 fused_ordering(239) 00:13:30.955 fused_ordering(240) 00:13:30.955 fused_ordering(241) 00:13:30.955 fused_ordering(242) 00:13:30.955 fused_ordering(243) 00:13:30.955 fused_ordering(244) 00:13:30.955 fused_ordering(245) 00:13:30.955 fused_ordering(246) 00:13:30.955 fused_ordering(247) 00:13:30.955 fused_ordering(248) 00:13:30.955 fused_ordering(249) 00:13:30.955 fused_ordering(250) 00:13:30.955 fused_ordering(251) 00:13:30.955 fused_ordering(252) 00:13:30.955 fused_ordering(253) 00:13:30.955 fused_ordering(254) 00:13:30.955 fused_ordering(255) 00:13:30.955 fused_ordering(256) 00:13:30.955 fused_ordering(257) 00:13:30.955 fused_ordering(258) 00:13:30.955 fused_ordering(259) 00:13:30.955 fused_ordering(260) 00:13:30.955 fused_ordering(261) 00:13:30.955 fused_ordering(262) 00:13:30.955 fused_ordering(263) 00:13:30.955 fused_ordering(264) 00:13:30.955 fused_ordering(265) 00:13:30.955 fused_ordering(266) 00:13:30.955 fused_ordering(267) 00:13:30.955 fused_ordering(268) 00:13:30.955 fused_ordering(269) 00:13:30.955 fused_ordering(270) 00:13:30.955 fused_ordering(271) 00:13:30.955 fused_ordering(272) 00:13:30.955 fused_ordering(273) 00:13:30.955 fused_ordering(274) 00:13:30.955 fused_ordering(275) 00:13:30.955 fused_ordering(276) 00:13:30.955 fused_ordering(277) 00:13:30.955 fused_ordering(278) 00:13:30.955 fused_ordering(279) 00:13:30.955 fused_ordering(280) 00:13:30.955 fused_ordering(281) 00:13:30.955 fused_ordering(282) 00:13:30.955 fused_ordering(283) 00:13:30.955 fused_ordering(284) 00:13:30.955 fused_ordering(285) 00:13:30.955 fused_ordering(286) 00:13:30.955 fused_ordering(287) 00:13:30.955 fused_ordering(288) 00:13:30.955 fused_ordering(289) 00:13:30.955 fused_ordering(290) 00:13:30.955 fused_ordering(291) 00:13:30.955 fused_ordering(292) 00:13:30.955 fused_ordering(293) 00:13:30.955 fused_ordering(294) 00:13:30.955 fused_ordering(295) 00:13:30.955 fused_ordering(296) 00:13:30.955 fused_ordering(297) 00:13:30.955 fused_ordering(298) 00:13:30.955 fused_ordering(299) 00:13:30.955 fused_ordering(300) 00:13:30.955 fused_ordering(301) 00:13:30.955 fused_ordering(302) 00:13:30.955 fused_ordering(303) 00:13:30.955 fused_ordering(304) 00:13:30.955 fused_ordering(305) 00:13:30.955 fused_ordering(306) 00:13:30.955 fused_ordering(307) 00:13:30.955 fused_ordering(308) 00:13:30.955 fused_ordering(309) 00:13:30.955 fused_ordering(310) 00:13:30.955 fused_ordering(311) 00:13:30.955 
fused_ordering(312) 00:13:30.955 fused_ordering(313) 00:13:30.955 fused_ordering(314) 00:13:30.955 fused_ordering(315) 00:13:30.955 fused_ordering(316) 00:13:30.955 fused_ordering(317) 00:13:30.955 fused_ordering(318) 00:13:30.955 fused_ordering(319) 00:13:30.955 fused_ordering(320) 00:13:30.955 fused_ordering(321) 00:13:30.955 fused_ordering(322) 00:13:30.955 fused_ordering(323) 00:13:30.955 fused_ordering(324) 00:13:30.955 fused_ordering(325) 00:13:30.955 fused_ordering(326) 00:13:30.955 fused_ordering(327) 00:13:30.955 fused_ordering(328) 00:13:30.955 fused_ordering(329) 00:13:30.955 fused_ordering(330) 00:13:30.955 fused_ordering(331) 00:13:30.955 fused_ordering(332) 00:13:30.955 fused_ordering(333) 00:13:30.955 fused_ordering(334) 00:13:30.955 fused_ordering(335) 00:13:30.955 fused_ordering(336) 00:13:30.955 fused_ordering(337) 00:13:30.955 fused_ordering(338) 00:13:30.955 fused_ordering(339) 00:13:30.955 fused_ordering(340) 00:13:30.955 fused_ordering(341) 00:13:30.955 fused_ordering(342) 00:13:30.955 fused_ordering(343) 00:13:30.955 fused_ordering(344) 00:13:30.955 fused_ordering(345) 00:13:30.955 fused_ordering(346) 00:13:30.955 fused_ordering(347) 00:13:30.955 fused_ordering(348) 00:13:30.955 fused_ordering(349) 00:13:30.955 fused_ordering(350) 00:13:30.955 fused_ordering(351) 00:13:30.955 fused_ordering(352) 00:13:30.955 fused_ordering(353) 00:13:30.955 fused_ordering(354) 00:13:30.955 fused_ordering(355) 00:13:30.955 fused_ordering(356) 00:13:30.955 fused_ordering(357) 00:13:30.955 fused_ordering(358) 00:13:30.955 fused_ordering(359) 00:13:30.955 fused_ordering(360) 00:13:30.955 fused_ordering(361) 00:13:30.955 fused_ordering(362) 00:13:30.955 fused_ordering(363) 00:13:30.955 fused_ordering(364) 00:13:30.955 fused_ordering(365) 00:13:30.955 fused_ordering(366) 00:13:30.955 fused_ordering(367) 00:13:30.955 fused_ordering(368) 00:13:30.955 fused_ordering(369) 00:13:30.955 fused_ordering(370) 00:13:30.955 fused_ordering(371) 00:13:30.955 fused_ordering(372) 00:13:30.955 fused_ordering(373) 00:13:30.955 fused_ordering(374) 00:13:30.955 fused_ordering(375) 00:13:30.955 fused_ordering(376) 00:13:30.955 fused_ordering(377) 00:13:30.955 fused_ordering(378) 00:13:30.955 fused_ordering(379) 00:13:30.955 fused_ordering(380) 00:13:30.955 fused_ordering(381) 00:13:30.955 fused_ordering(382) 00:13:30.955 fused_ordering(383) 00:13:30.955 fused_ordering(384) 00:13:30.955 fused_ordering(385) 00:13:30.955 fused_ordering(386) 00:13:30.955 fused_ordering(387) 00:13:30.955 fused_ordering(388) 00:13:30.955 fused_ordering(389) 00:13:30.955 fused_ordering(390) 00:13:30.955 fused_ordering(391) 00:13:30.955 fused_ordering(392) 00:13:30.955 fused_ordering(393) 00:13:30.955 fused_ordering(394) 00:13:30.955 fused_ordering(395) 00:13:30.955 fused_ordering(396) 00:13:30.955 fused_ordering(397) 00:13:30.955 fused_ordering(398) 00:13:30.955 fused_ordering(399) 00:13:30.955 fused_ordering(400) 00:13:30.955 fused_ordering(401) 00:13:30.955 fused_ordering(402) 00:13:30.955 fused_ordering(403) 00:13:30.955 fused_ordering(404) 00:13:30.955 fused_ordering(405) 00:13:30.955 fused_ordering(406) 00:13:30.955 fused_ordering(407) 00:13:30.955 fused_ordering(408) 00:13:30.955 fused_ordering(409) 00:13:30.955 fused_ordering(410) 00:13:31.522 fused_ordering(411) 00:13:31.522 fused_ordering(412) 00:13:31.522 fused_ordering(413) 00:13:31.522 fused_ordering(414) 00:13:31.522 fused_ordering(415) 00:13:31.522 fused_ordering(416) 00:13:31.522 fused_ordering(417) 00:13:31.522 fused_ordering(418) 00:13:31.522 fused_ordering(419) 
00:13:31.522 fused_ordering(420) 00:13:31.522 fused_ordering(421) 00:13:31.522 fused_ordering(422) 00:13:31.522 fused_ordering(423) 00:13:31.522 fused_ordering(424) 00:13:31.522 fused_ordering(425) 00:13:31.522 fused_ordering(426) 00:13:31.522 fused_ordering(427) 00:13:31.522 fused_ordering(428) 00:13:31.522 fused_ordering(429) 00:13:31.522 fused_ordering(430) 00:13:31.522 fused_ordering(431) 00:13:31.522 fused_ordering(432) 00:13:31.522 fused_ordering(433) 00:13:31.522 fused_ordering(434) 00:13:31.522 fused_ordering(435) 00:13:31.522 fused_ordering(436) 00:13:31.522 fused_ordering(437) 00:13:31.522 fused_ordering(438) 00:13:31.522 fused_ordering(439) 00:13:31.522 fused_ordering(440) 00:13:31.522 fused_ordering(441) 00:13:31.522 fused_ordering(442) 00:13:31.522 fused_ordering(443) 00:13:31.522 fused_ordering(444) 00:13:31.522 fused_ordering(445) 00:13:31.522 fused_ordering(446) 00:13:31.522 fused_ordering(447) 00:13:31.522 fused_ordering(448) 00:13:31.522 fused_ordering(449) 00:13:31.522 fused_ordering(450) 00:13:31.522 fused_ordering(451) 00:13:31.522 fused_ordering(452) 00:13:31.522 fused_ordering(453) 00:13:31.522 fused_ordering(454) 00:13:31.522 fused_ordering(455) 00:13:31.522 fused_ordering(456) 00:13:31.522 fused_ordering(457) 00:13:31.522 fused_ordering(458) 00:13:31.522 fused_ordering(459) 00:13:31.522 fused_ordering(460) 00:13:31.522 fused_ordering(461) 00:13:31.522 fused_ordering(462) 00:13:31.522 fused_ordering(463) 00:13:31.522 fused_ordering(464) 00:13:31.522 fused_ordering(465) 00:13:31.522 fused_ordering(466) 00:13:31.522 fused_ordering(467) 00:13:31.522 fused_ordering(468) 00:13:31.522 fused_ordering(469) 00:13:31.522 fused_ordering(470) 00:13:31.522 fused_ordering(471) 00:13:31.522 fused_ordering(472) 00:13:31.522 fused_ordering(473) 00:13:31.522 fused_ordering(474) 00:13:31.522 fused_ordering(475) 00:13:31.522 fused_ordering(476) 00:13:31.522 fused_ordering(477) 00:13:31.522 fused_ordering(478) 00:13:31.522 fused_ordering(479) 00:13:31.522 fused_ordering(480) 00:13:31.522 fused_ordering(481) 00:13:31.522 fused_ordering(482) 00:13:31.522 fused_ordering(483) 00:13:31.522 fused_ordering(484) 00:13:31.522 fused_ordering(485) 00:13:31.522 fused_ordering(486) 00:13:31.522 fused_ordering(487) 00:13:31.522 fused_ordering(488) 00:13:31.522 fused_ordering(489) 00:13:31.522 fused_ordering(490) 00:13:31.522 fused_ordering(491) 00:13:31.522 fused_ordering(492) 00:13:31.522 fused_ordering(493) 00:13:31.522 fused_ordering(494) 00:13:31.522 fused_ordering(495) 00:13:31.522 fused_ordering(496) 00:13:31.522 fused_ordering(497) 00:13:31.522 fused_ordering(498) 00:13:31.522 fused_ordering(499) 00:13:31.522 fused_ordering(500) 00:13:31.522 fused_ordering(501) 00:13:31.522 fused_ordering(502) 00:13:31.522 fused_ordering(503) 00:13:31.522 fused_ordering(504) 00:13:31.522 fused_ordering(505) 00:13:31.522 fused_ordering(506) 00:13:31.522 fused_ordering(507) 00:13:31.522 fused_ordering(508) 00:13:31.522 fused_ordering(509) 00:13:31.522 fused_ordering(510) 00:13:31.522 fused_ordering(511) 00:13:31.522 fused_ordering(512) 00:13:31.522 fused_ordering(513) 00:13:31.522 fused_ordering(514) 00:13:31.522 fused_ordering(515) 00:13:31.522 fused_ordering(516) 00:13:31.522 fused_ordering(517) 00:13:31.522 fused_ordering(518) 00:13:31.522 fused_ordering(519) 00:13:31.522 fused_ordering(520) 00:13:31.522 fused_ordering(521) 00:13:31.522 fused_ordering(522) 00:13:31.522 fused_ordering(523) 00:13:31.522 fused_ordering(524) 00:13:31.522 fused_ordering(525) 00:13:31.522 fused_ordering(526) 00:13:31.522 
fused_ordering(527) 00:13:31.522 fused_ordering(528) 00:13:31.522 fused_ordering(529) 00:13:31.522 fused_ordering(530) 00:13:31.522 fused_ordering(531) 00:13:31.522 fused_ordering(532) 00:13:31.522 fused_ordering(533) 00:13:31.522 fused_ordering(534) 00:13:31.522 fused_ordering(535) 00:13:31.522 fused_ordering(536) 00:13:31.522 fused_ordering(537) 00:13:31.522 fused_ordering(538) 00:13:31.522 fused_ordering(539) 00:13:31.522 fused_ordering(540) 00:13:31.522 fused_ordering(541) 00:13:31.522 fused_ordering(542) 00:13:31.522 fused_ordering(543) 00:13:31.522 fused_ordering(544) 00:13:31.522 fused_ordering(545) 00:13:31.523 fused_ordering(546) 00:13:31.523 fused_ordering(547) 00:13:31.523 fused_ordering(548) 00:13:31.523 fused_ordering(549) 00:13:31.523 fused_ordering(550) 00:13:31.523 fused_ordering(551) 00:13:31.523 fused_ordering(552) 00:13:31.523 fused_ordering(553) 00:13:31.523 fused_ordering(554) 00:13:31.523 fused_ordering(555) 00:13:31.523 fused_ordering(556) 00:13:31.523 fused_ordering(557) 00:13:31.523 fused_ordering(558) 00:13:31.523 fused_ordering(559) 00:13:31.523 fused_ordering(560) 00:13:31.523 fused_ordering(561) 00:13:31.523 fused_ordering(562) 00:13:31.523 fused_ordering(563) 00:13:31.523 fused_ordering(564) 00:13:31.523 fused_ordering(565) 00:13:31.523 fused_ordering(566) 00:13:31.523 fused_ordering(567) 00:13:31.523 fused_ordering(568) 00:13:31.523 fused_ordering(569) 00:13:31.523 fused_ordering(570) 00:13:31.523 fused_ordering(571) 00:13:31.523 fused_ordering(572) 00:13:31.523 fused_ordering(573) 00:13:31.523 fused_ordering(574) 00:13:31.523 fused_ordering(575) 00:13:31.523 fused_ordering(576) 00:13:31.523 fused_ordering(577) 00:13:31.523 fused_ordering(578) 00:13:31.523 fused_ordering(579) 00:13:31.523 fused_ordering(580) 00:13:31.523 fused_ordering(581) 00:13:31.523 fused_ordering(582) 00:13:31.523 fused_ordering(583) 00:13:31.523 fused_ordering(584) 00:13:31.523 fused_ordering(585) 00:13:31.523 fused_ordering(586) 00:13:31.523 fused_ordering(587) 00:13:31.523 fused_ordering(588) 00:13:31.523 fused_ordering(589) 00:13:31.523 fused_ordering(590) 00:13:31.523 fused_ordering(591) 00:13:31.523 fused_ordering(592) 00:13:31.523 fused_ordering(593) 00:13:31.523 fused_ordering(594) 00:13:31.523 fused_ordering(595) 00:13:31.523 fused_ordering(596) 00:13:31.523 fused_ordering(597) 00:13:31.523 fused_ordering(598) 00:13:31.523 fused_ordering(599) 00:13:31.523 fused_ordering(600) 00:13:31.523 fused_ordering(601) 00:13:31.523 fused_ordering(602) 00:13:31.523 fused_ordering(603) 00:13:31.523 fused_ordering(604) 00:13:31.523 fused_ordering(605) 00:13:31.523 fused_ordering(606) 00:13:31.523 fused_ordering(607) 00:13:31.523 fused_ordering(608) 00:13:31.523 fused_ordering(609) 00:13:31.523 fused_ordering(610) 00:13:31.523 fused_ordering(611) 00:13:31.523 fused_ordering(612) 00:13:31.523 fused_ordering(613) 00:13:31.523 fused_ordering(614) 00:13:31.523 fused_ordering(615) 00:13:32.091 fused_ordering(616) 00:13:32.091 fused_ordering(617) 00:13:32.091 fused_ordering(618) 00:13:32.091 fused_ordering(619) 00:13:32.091 fused_ordering(620) 00:13:32.091 fused_ordering(621) 00:13:32.091 fused_ordering(622) 00:13:32.091 fused_ordering(623) 00:13:32.091 fused_ordering(624) 00:13:32.091 fused_ordering(625) 00:13:32.091 fused_ordering(626) 00:13:32.091 fused_ordering(627) 00:13:32.091 fused_ordering(628) 00:13:32.091 fused_ordering(629) 00:13:32.091 fused_ordering(630) 00:13:32.091 fused_ordering(631) 00:13:32.091 fused_ordering(632) 00:13:32.091 fused_ordering(633) 00:13:32.091 fused_ordering(634) 
00:13:32.091 fused_ordering(635) ... 00:13:32.659 fused_ordering(1023) [repetitive per-call fused_ordering(n) trace for calls 635-1023 elided]
00:13:32.659 01:41:56 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:32.659 01:41:56 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:32.659 01:41:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:32.659 01:41:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:13:32.659 01:41:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:32.659 01:41:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:13:32.659 01:41:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:32.659 01:41:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:32.659 rmmod nvme_tcp 00:13:32.659 rmmod nvme_fabrics 00:13:32.918 rmmod nvme_keyring 00:13:32.918 01:41:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:32.918 01:41:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:13:32.918 01:41:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:13:32.918 01:41:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 3998476 ']' 00:13:32.918 01:41:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 3998476
00:13:32.918 01:41:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@947 -- # '[' -z 3998476 ']' 00:13:32.918 01:41:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # kill -0 3998476 00:13:32.918 01:41:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # uname 00:13:32.918 01:41:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:32.918 01:41:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3998476 00:13:32.918 01:41:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:13:32.918 01:41:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:13:32.918 01:41:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3998476' 00:13:32.918 killing process with pid 3998476 00:13:32.918 01:41:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # kill 3998476 00:13:32.918 [2024-05-15 01:41:56.647037] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:32.918 01:41:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # wait 3998476 00:13:33.177 01:41:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:33.177 01:41:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:33.177 01:41:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:33.177 01:41:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:33.177 01:41:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:33.177 01:41:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:33.177 01:41:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:33.177 01:41:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.081 01:41:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:35.081 00:13:35.081 real 0m7.988s 00:13:35.081 user 0m5.378s 00:13:35.081 sys 0m3.469s 00:13:35.081 01:41:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:35.081 01:41:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:35.081 ************************************ 00:13:35.081 END TEST nvmf_fused_ordering 00:13:35.081 ************************************ 00:13:35.081 01:41:58 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:35.081 01:41:58 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:13:35.081 01:41:58 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:35.081 01:41:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:35.081 ************************************ 00:13:35.081 START TEST nvmf_delete_subsystem 00:13:35.081 ************************************ 00:13:35.081 01:41:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:35.081 
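The nvmftestfini teardown traced above follows a fixed pattern: unload the initiator kernel modules, kill and reap the nvmf_tgt process, then dismantle the test network. A minimal bash sketch of the same sequence, assuming the variable name $nvmfpid from the trace (3998476 here); treating 'ip netns delete' as the body of _remove_spdk_ns is an assumption, since the helper itself is not shown in this log:

    # Sketch of the nvmftestfini/nvmfcleanup teardown traced above.
    modprobe -v -r nvme-tcp || true       # unload initiator modules (retried in the real helper)
    modprobe -v -r nvme-fabrics || true
    kill "$nvmfpid" && wait "$nvmfpid"    # stop the nvmf_tgt reactor process
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumed _remove_spdk_ns equivalent
    ip -4 addr flush cvl_0_1              # drop the initiator-side address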
* Looking for test storage... 00:13:35.340 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:35.340 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:35.340 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:13:35.340 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:35.340 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:35.340 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:35.340 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:35.340 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:35.340 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:35.340 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:35.340 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:35.340 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:35.340 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:35.340 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:35.340 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:35.340 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:35.340 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:35.340 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:35.340 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:35.340 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:35.340 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:35.340 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:35.340 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:35.340 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.340 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.340 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.340 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:13:35.340 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.340 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:13:35.340 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:35.341 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:35.341 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:35.341 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:35.341 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:35.341 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:35.341 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:35.341 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:35.341 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:13:35.341 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:35.341 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:35.341 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:35.341 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:35.341 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:35.341 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.341 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:35.341 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.341 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:35.341 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:35.341 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:13:35.341 01:41:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:37.871 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:37.871 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:13:37.871 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:37.871 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:37.871 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:37.871 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:37.872 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:37.872 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:37.872 Found net devices under 0000:09:00.0: cvl_0_0 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:37.872 Found net devices under 0000:09:00.1: cvl_0_1 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:37.872 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:37.872 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:13:37.872 00:13:37.872 --- 10.0.0.2 ping statistics --- 00:13:37.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.872 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:37.872 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:37.872 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:13:37.872 00:13:37.872 --- 10.0.0.1 ping statistics --- 00:13:37.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.872 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@721 -- # xtrace_disable 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=4001191 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 4001191 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@828 -- # '[' -z 4001191 ']' 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
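The two successful pings above close out the network bring-up: after mapping the two E810 ports to net devices via /sys/bus/pci/devices/$pci/net (the "Found net devices under 0000:09:00.x" lines), the harness moves cvl_0_0 into a fresh namespace as the target side and keeps cvl_0_1 in the root namespace as the initiator. A condensed sketch of that sequence, using only the interface names and addresses from this trace:

    # Target/initiator split as traced above: cvl_0_0 = target side
    # inside the namespace, cvl_0_1 = initiator in the root namespace.
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP in
    ping -c 1 10.0.0.2                      # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator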
00:13:37.872 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:37.873 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:37.873 [2024-05-15 01:42:01.609293] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:13:37.873 [2024-05-15 01:42:01.609381] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:37.873 EAL: No free 2048 kB hugepages reported on node 1 00:13:37.873 [2024-05-15 01:42:01.684105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:37.873 [2024-05-15 01:42:01.765939] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:37.873 [2024-05-15 01:42:01.766004] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:37.873 [2024-05-15 01:42:01.766018] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:37.873 [2024-05-15 01:42:01.766029] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:37.873 [2024-05-15 01:42:01.766055] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:37.873 [2024-05-15 01:42:01.766148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:37.873 [2024-05-15 01:42:01.766153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.131 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:38.131 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@861 -- # return 0 00:13:38.131 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:38.131 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@727 -- # xtrace_disable 00:13:38.131 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:38.131 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:38.131 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:38.131 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:38.131 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:38.131 [2024-05-15 01:42:01.903418] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:38.131 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:38.131 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:38.131 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:38.131 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:38.131 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:38.131 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:38.131 01:42:01 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:38.131 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:38.131 [2024-05-15 01:42:01.919426] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:38.131 [2024-05-15 01:42:01.919761] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:38.131 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:38.131 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:38.131 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:38.131 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:38.131 NULL1 00:13:38.131 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:38.131 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:38.131 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:38.131 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:38.131 Delay0 00:13:38.131 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:38.131 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.131 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:38.131 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:38.131 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:38.131 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=4001238 00:13:38.131 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:13:38.131 01:42:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:38.131 EAL: No free 2048 kB hugepages reported on node 1 00:13:38.131 [2024-05-15 01:42:01.994318] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
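The rpc_cmd calls traced above assemble the whole target before perf starts: a TCP transport, subsystem cnode1 capped at 10 namespaces, a listener on 10.0.0.2:4420, and a null bdev wrapped in a delay bdev so that in-flight I/O is guaranteed to exist when the subsystem is deleted. The same sequence, sketched with SPDK's scripts/rpc.py client (rpc_cmd is a thin wrapper around it; the default /var/tmp/spdk.sock socket is assumed):

    RPC=./scripts/rpc.py   # assumes the default RPC socket /var/tmp/spdk.sock
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_null_create NULL1 1000 512   # 1000 MB null bdev, 512-byte blocks
    $RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s latencies (values in us)
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

In the spdk_nvme_perf invocation above, -c 0xC pins the initiator to cores 2 and 3 (matching the "with lcore 2/3" lines later), -q 128 is the queue depth, -w randrw -M 70 runs a 70%-read random mix, -o 512 uses 512-byte I/Os, and -t sets the run time in seconds.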
00:13:40.024 01:42:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:40.024 01:42:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:40.024 01:42:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:13:40.282 [runs of 'Read/Write completed with error (sct=0, sc=8)' completions, interleaved with 'starting I/O failed: -6' markers between the ERROR lines below, elided]
00:13:40.282 [2024-05-15 01:42:04.116633] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa16180 is same with the state(5) to be set
00:13:40.282 [2024-05-15 01:42:04.117581] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffb50000c00 is same with the state(5) to be set
00:13:40.283 [2024-05-15 01:42:04.118064] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa16790 is same with the state(5) to be set
00:13:41.216 [2024-05-15 01:42:05.091793] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa198b0 is same with the state(5) to be set
00:13:41.216 [2024-05-15 01:42:05.120619] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffb5000bfe0 is same with the state(5) to be set
00:13:41.216 [2024-05-15 01:42:05.120761] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffb5000c600 is same with the state(5) to be set
00:13:41.216 [2024-05-15 01:42:05.121145] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa16360 is same with the state(5) to be set
00:13:41.216 [2024-05-15 01:42:05.121313] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa16aa0 is same with the state(5) to be set
00:13:41.216 Initializing NVMe Controllers
00:13:41.216 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:13:41.216 Controller IO queue size 128, less than required.
00:13:41.216 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:13:41.216 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:13:41.216 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:13:41.216 Initialization complete. Launching workers.
00:13:41.216 ========================================================
00:13:41.216 Latency(us)
00:13:41.216 Device Information : IOPS MiB/s Average min max
00:13:41.216 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 163.74 0.08 911539.05 1466.33 1046060.21
00:13:41.216 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 156.29 0.08 943303.84 478.10 2005477.41
00:13:41.216 ========================================================
00:13:41.216 Total : 320.03 0.16 927052.09 478.10 2005477.41
00:13:41.216 [2024-05-15 01:42:05.122168] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa198b0 (9): Bad file descriptor
00:13:41.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:13:41.216 01:42:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:41.216 01:42:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:13:41.216 01:42:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4001238 00:13:41.216 01:42:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
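The delay=0 / kill -0 / sleep 0.5 trace starting above is a bounded poll: after deleting the subsystem out from under perf, the test waits for the perf process to die on its own instead of killing it. A sketch of the loop as traced:

    # Bounded wait for perf to exit after the subsystem is deleted;
    # the cap mirrors the '(( delay++ > 30 ))' check in the trace (~15 s).
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 30 )) && exit 1   # perf should have died by now
        sleep 0.5
    done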
00:13:41.780 01:42:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:41.780 01:42:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:41.780 [2024-05-15 01:42:05.640411] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:41.780 01:42:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:41.780 01:42:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:41.780 01:42:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:41.780 01:42:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:41.780 01:42:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:41.780 01:42:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=4001651 00:13:41.780 01:42:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:13:41.780 01:42:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:41.780 01:42:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4001651 00:13:41.780 01:42:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:41.780 EAL: No free 2048 kB hugepages reported on node 1 00:13:41.780 [2024-05-15 01:42:05.696299] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
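
For reference, the rebuild that the trace above performs, collapsed into plain rpc.py calls (every flag is taken verbatim from the delete_subsystem.sh@48-54 trace; rpc_cmd is the suite's thin wrapper around scripts/rpc.py, and $SPDK is as in the sketch earlier):

    RPC="$SPDK/scripts/rpc.py"
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    "$SPDK/build/bin/spdk_nvme_perf" -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
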
00:13:42.342 01:42:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:42.342 01:42:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4001651 00:13:42.342 01:42:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:42.949 01:42:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:42.949 01:42:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4001651 00:13:42.949 01:42:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:43.509 01:42:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:43.509 01:42:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4001651 00:13:43.509 01:42:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:43.765 01:42:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:43.765 01:42:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4001651 00:13:43.765 01:42:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:44.332 01:42:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:44.332 01:42:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4001651 00:13:44.332 01:42:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:44.898 01:42:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:44.898 01:42:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4001651 00:13:44.898 01:42:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:44.898 Initializing NVMe Controllers 00:13:44.898 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:44.898 Controller IO queue size 128, less than required. 00:13:44.898 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:44.898 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:44.898 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:44.898 Initialization complete. Launching workers. 
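
The half-second cadence above is a bounded liveness poll: kill -0 probes whether the perf process still exists without delivering a signal, and the counter caps the wait at roughly ten seconds. The same shape, sketched (interval and limit taken from the trace; error handling kept minimal):

    # Bounded poll, mirroring delete_subsystem.sh@56-60 in the trace.
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        if (( delay++ > 20 )); then
            echo "spdk_nvme_perf still alive after ~10s" >&2
            break
        fi
        sleep 0.5
    done
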
00:13:44.898 ========================================================
00:13:44.898                                                                                            Latency(us)
00:13:44.898 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:13:44.898 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1003232.59 1000205.46 1010514.96
00:13:44.898 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1006318.00 1000210.64 1042178.34
00:13:44.898 ========================================================
00:13:44.898 Total                                                                    :     256.00       0.12 1004775.30 1000205.46 1042178.34
00:13:44.898
00:13:45.463 01:42:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:13:45.463 01:42:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4001651
00:13:45.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (4001651) - No such process
00:13:45.463 01:42:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 4001651
00:13:45.463 01:42:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:13:45.463 01:42:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:13:45.463 01:42:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:13:45.463 01:42:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:13:45.463 01:42:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:13:45.463 01:42:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:13:45.463 01:42:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:13:45.463 01:42:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:13:45.463 01:42:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:13:45.463 01:42:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:13:45.463 01:42:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:13:45.463 01:42:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 4001191 ']'
00:13:45.463 01:42:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 4001191
00:13:45.463 01:42:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@947 -- # '[' -z 4001191 ']'
00:13:45.463 01:42:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # kill -0 4001191
00:13:45.463 01:42:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # uname
00:13:45.463 01:42:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:13:45.463 01:42:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4001191
00:13:45.464 01:42:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # process_name=reactor_0
00:13:45.464 01:42:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']'
00:13:45.464 01:42:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4001191'
killing process with pid 4001191
00:13:45.464 01:42:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # kill 4001191
00:13:45.464 [2024-05-15 01:42:09.258440] app.c:1024:log_deprecation_hits: *WARNING*:
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:45.464 01:42:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # wait 4001191 00:13:45.722 01:42:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:45.722 01:42:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:45.722 01:42:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:45.722 01:42:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:45.722 01:42:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:45.722 01:42:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.722 01:42:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:45.722 01:42:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:47.625 01:42:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:47.625 00:13:47.625 real 0m12.548s 00:13:47.625 user 0m27.722s 00:13:47.625 sys 0m3.158s 00:13:47.625 01:42:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:47.625 01:42:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:47.625 ************************************ 00:13:47.625 END TEST nvmf_delete_subsystem 00:13:47.625 ************************************ 00:13:47.625 01:42:11 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:47.625 01:42:11 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:13:47.625 01:42:11 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:47.625 01:42:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:47.884 ************************************ 00:13:47.884 START TEST nvmf_ns_masking 00:13:47.884 ************************************ 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:47.884 * Looking for test storage... 
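
The delete_subsystem run above ended with nvmftestfini, which unloads the initiator-side kernel modules and kills the target application. Condensed from the trace (module names, pid, and the address flush are taken from this log; the real helper retries the unload with errexit suppressed):

    # Condensed nvmftestfini teardown for the phy/TCP case, per the trace.
    modprobe -v -r nvme-tcp       # retried up to 20x inside set +e in the helper
    modprobe -v -r nvme-fabrics
    kill 4001191                  # the nvmf_tgt reactor_0 process of this test
    ip -4 addr flush cvl_0_1      # drop the initiator-side test address
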
00:13:47.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=7f269dae-2fa5-4cea-aec0-e947356e1763 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:47.884 01:42:11 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:13:47.884 01:42:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:50.412 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:50.412 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:50.412 Found net devices under 0000:09:00.0: cvl_0_0 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
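
The scan above builds a whitelist of supported NIC PCI IDs (Intel e810/x722 and Mellanox mlx5 variants), then resolves each matched function to its kernel netdev through sysfs. The lookup step, sketched for the two ice functions found in this run (addresses and the cvl_0_* names are specific to this machine):

    # Sysfs netdev lookup as performed per PCI function in the trace.
    for pci in 0000:09:00.0 0000:09:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one entry per netdev
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep the basename
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done
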
00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:50.412 Found net devices under 0000:09:00.1: cvl_0_1 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:50.412 01:42:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:50.412 01:42:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:50.412 01:42:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:50.412 01:42:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:50.412 01:42:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:50.412 01:42:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:50.412 01:42:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:13:50.412 01:42:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:50.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:50.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:13:50.412 00:13:50.412 --- 10.0.0.2 ping statistics --- 00:13:50.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.412 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:13:50.412 01:42:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:50.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:50.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:13:50.412 00:13:50.412 --- 10.0.0.1 ping statistics --- 00:13:50.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.412 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:13:50.412 01:42:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:50.412 01:42:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:13:50.412 01:42:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:50.413 01:42:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:50.413 01:42:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:50.413 01:42:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:50.413 01:42:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:50.413 01:42:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:50.413 01:42:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:50.413 01:42:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:13:50.413 01:42:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:50.413 01:42:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@721 -- # xtrace_disable 00:13:50.413 01:42:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:50.413 01:42:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=4004904 00:13:50.413 01:42:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:50.413 01:42:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 4004904 00:13:50.413 01:42:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@828 -- # '[' -z 4004904 ']' 00:13:50.413 01:42:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.413 01:42:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:50.413 01:42:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.413 01:42:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:50.413 01:42:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:50.413 [2024-05-15 01:42:14.135915] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
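
The namespace split traced above keeps target and initiator on the same host while forcing real NIC-to-NIC TCP: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, port 4420 is opened, and both directions are ping-verified before nvmf_tgt starts inside the namespace. Collected in one place, with every command taken from the trace ($SPDK as in the sketches above):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
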
00:13:50.413 [2024-05-15 01:42:14.135997] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:50.413 EAL: No free 2048 kB hugepages reported on node 1 00:13:50.413 [2024-05-15 01:42:14.212015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:50.413 [2024-05-15 01:42:14.299319] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:50.413 [2024-05-15 01:42:14.299382] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:50.413 [2024-05-15 01:42:14.299411] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:50.413 [2024-05-15 01:42:14.299422] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:50.413 [2024-05-15 01:42:14.299432] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:50.413 [2024-05-15 01:42:14.299497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:50.413 [2024-05-15 01:42:14.299555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:50.413 [2024-05-15 01:42:14.299621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:50.413 [2024-05-15 01:42:14.299623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.670 01:42:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:50.670 01:42:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@861 -- # return 0 00:13:50.670 01:42:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:50.670 01:42:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@727 -- # xtrace_disable 00:13:50.670 01:42:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:50.670 01:42:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:50.670 01:42:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:50.927 [2024-05-15 01:42:14.692686] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:50.927 01:42:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:13:50.927 01:42:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:13:50.927 01:42:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:51.184 Malloc1 00:13:51.184 01:42:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:51.442 Malloc2 00:13:51.442 01:42:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:51.698 01:42:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:51.954 01:42:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:52.212 [2024-05-15 01:42:15.927529] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:52.212 [2024-05-15 01:42:15.927841] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:52.212 01:42:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:13:52.212 01:42:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7f269dae-2fa5-4cea-aec0-e947356e1763 -a 10.0.0.2 -s 4420 -i 4 00:13:52.212 01:42:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:13:52.212 01:42:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local i=0 00:13:52.212 01:42:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:13:52.212 01:42:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:13:52.212 01:42:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # sleep 2 00:13:54.737 01:42:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:13:54.737 01:42:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:13:54.737 01:42:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:13:54.737 01:42:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:13:54.737 01:42:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:13:54.737 01:42:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # return 0 00:13:54.737 01:42:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:13:54.737 01:42:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:54.737 01:42:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:13:54.737 01:42:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:13:54.737 01:42:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:13:54.737 01:42:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:54.737 01:42:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:13:54.737 [ 0]:0x1 00:13:54.737 01:42:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:54.737 01:42:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:54.737 01:42:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=0961e923144d4dbab693c312d03dd628 00:13:54.737 01:42:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 0961e923144d4dbab693c312d03dd628 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:54.737 01:42:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:54.737 01:42:18 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:13:54.737 01:42:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:54.737 01:42:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:13:54.737 [ 0]:0x1 00:13:54.737 01:42:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:54.737 01:42:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:54.737 01:42:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=0961e923144d4dbab693c312d03dd628 00:13:54.737 01:42:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 0961e923144d4dbab693c312d03dd628 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:54.737 01:42:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:13:54.737 01:42:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:54.737 01:42:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:13:54.737 [ 1]:0x2 00:13:54.737 01:42:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:54.737 01:42:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:54.737 01:42:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=c50ac625fe4b425bbeb37ca591e3d203 00:13:54.737 01:42:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ c50ac625fe4b425bbeb37ca591e3d203 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:54.737 01:42:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:13:54.737 01:42:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:54.995 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:54.995 01:42:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.253 01:42:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:55.510 01:42:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:13:55.510 01:42:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7f269dae-2fa5-4cea-aec0-e947356e1763 -a 10.0.0.2 -s 4420 -i 4 00:13:55.510 01:42:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:55.510 01:42:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local i=0 00:13:55.510 01:42:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:13:55.510 01:42:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # [[ -n 1 ]] 00:13:55.510 01:42:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # nvme_device_counter=1 00:13:55.510 01:42:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # sleep 2 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # 
grep -c SPDKISFASTANDAWESOME 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # return 0 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:13:58.034 [ 0]:0x2 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=c50ac625fe4b425bbeb37ca591e3d203 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ c50ac625fe4b425bbeb37ca591e3d203 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:13:58.034 [ 0]:0x1 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=0961e923144d4dbab693c312d03dd628 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 0961e923144d4dbab693c312d03dd628 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:13:58.034 [ 1]:0x2 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=c50ac625fe4b425bbeb37ca591e3d203 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ c50ac625fe4b425bbeb37ca591e3d203 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:58.034 01:42:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:58.292 01:42:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:13:58.292 01:42:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:13:58.292 01:42:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:13:58.292 01:42:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:13:58.292 01:42:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:58.292 01:42:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:13:58.292 01:42:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:58.292 01:42:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:13:58.292 01:42:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:58.292 01:42:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:13:58.292 01:42:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:58.292 01:42:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:58.292 01:42:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:13:58.292 01:42:22 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:58.292 01:42:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:13:58.292 01:42:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:13:58.292 01:42:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:13:58.292 01:42:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:13:58.292 01:42:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:13:58.292 01:42:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:13:58.292 01:42:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:13:58.292 [ 0]:0x2 00:13:58.292 01:42:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:58.292 01:42:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:13:58.292 01:42:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=c50ac625fe4b425bbeb37ca591e3d203 00:13:58.292 01:42:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ c50ac625fe4b425bbeb37ca591e3d203 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:58.292 01:42:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:13:58.292 01:42:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:58.550 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.550 01:42:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:58.808 01:42:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:13:58.808 01:42:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7f269dae-2fa5-4cea-aec0-e947356e1763 -a 10.0.0.2 -s 4420 -i 4 00:13:58.808 01:42:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:58.808 01:42:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local i=0 00:13:58.808 01:42:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:13:58.808 01:42:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # [[ -n 2 ]] 00:13:58.808 01:42:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # nvme_device_counter=2 00:13:58.808 01:42:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # sleep 2 00:14:01.333 01:42:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:14:01.333 01:42:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:14:01.333 01:42:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:14:01.333 01:42:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # nvme_devices=2 00:14:01.333 01:42:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:14:01.333 01:42:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # return 0 00:14:01.333 01:42:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 
-- # nvme list-subsys -o json 00:14:01.333 01:42:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:01.333 01:42:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:01.333 01:42:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:01.333 01:42:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:14:01.333 01:42:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:01.333 01:42:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:01.333 [ 0]:0x1 00:14:01.333 01:42:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:01.333 01:42:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:01.333 01:42:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=0961e923144d4dbab693c312d03dd628 00:14:01.333 01:42:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 0961e923144d4dbab693c312d03dd628 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:01.333 01:42:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:14:01.333 01:42:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:01.333 01:42:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:01.333 [ 1]:0x2 00:14:01.333 01:42:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:01.333 01:42:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:01.333 01:42:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=c50ac625fe4b425bbeb37ca591e3d203 00:14:01.333 01:42:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ c50ac625fe4b425bbeb37ca591e3d203 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:01.333 01:42:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:01.333 01:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:14:01.333 01:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:14:01.333 01:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:14:01.333 01:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:14:01.333 01:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:01.333 01:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:14:01.333 01:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:01.333 01:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:14:01.333 01:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:01.333 01:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:01.333 01:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:01.333 01:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:01.333 01:42:25 nvmf_tcp.nvmf_ns_masking 
-- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:01.333 01:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:01.333 01:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:14:01.333 01:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:14:01.333 01:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:14:01.333 01:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:14:01.333 01:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:14:01.333 01:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:01.333 01:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:01.333 [ 0]:0x2 00:14:01.333 01:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:01.333 01:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:01.333 01:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=c50ac625fe4b425bbeb37ca591e3d203 00:14:01.333 01:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ c50ac625fe4b425bbeb37ca591e3d203 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:01.333 01:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:01.333 01:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:14:01.333 01:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:01.333 01:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:01.333 01:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:01.333 01:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:01.333 01:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:01.333 01:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:01.333 01:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:01.333 01:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:01.334 01:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:01.334 01:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:01.592 [2024-05-15 01:42:25.405967] nvmf_rpc.c:1781:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:01.592 
request: 00:14:01.592 { 00:14:01.592 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:01.592 "nsid": 2, 00:14:01.592 "host": "nqn.2016-06.io.spdk:host1", 00:14:01.592 "method": "nvmf_ns_remove_host", 00:14:01.592 "req_id": 1 00:14:01.592 } 00:14:01.592 Got JSON-RPC error response 00:14:01.592 response: 00:14:01.592 { 00:14:01.592 "code": -32602, 00:14:01.592 "message": "Invalid parameters" 00:14:01.592 } 00:14:01.592 01:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:14:01.592 01:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:14:01.592 01:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:14:01.592 01:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:14:01.592 01:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:14:01.592 01:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:14:01.592 01:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:14:01.592 01:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:14:01.592 01:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:01.592 01:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:14:01.592 01:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:01.592 01:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:14:01.592 01:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:01.592 01:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:01.592 01:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:01.592 01:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:01.592 01:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:01.592 01:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:01.592 01:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:14:01.592 01:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:14:01.592 01:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:14:01.592 01:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:14:01.592 01:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:14:01.592 01:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:01.592 01:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:01.850 [ 0]:0x2 00:14:01.850 01:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:01.850 01:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:01.850 01:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=c50ac625fe4b425bbeb37ca591e3d203 00:14:01.850 01:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ c50ac625fe4b425bbeb37ca591e3d203 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:01.850 01:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:14:01.850 01:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:01.850 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:01.850 01:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:02.108 01:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:14:02.108 01:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:14:02.108 01:42:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:02.108 01:42:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:14:02.108 01:42:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:02.108 01:42:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:14:02.108 01:42:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:02.108 01:42:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:02.108 rmmod nvme_tcp 00:14:02.108 rmmod nvme_fabrics 00:14:02.108 rmmod nvme_keyring 00:14:02.108 01:42:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:02.108 01:42:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:14:02.108 01:42:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:14:02.108 01:42:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 4004904 ']' 00:14:02.108 01:42:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 4004904 00:14:02.108 01:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@947 -- # '[' -z 4004904 ']' 00:14:02.108 01:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # kill -0 4004904 00:14:02.108 01:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # uname 00:14:02.108 01:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:14:02.108 01:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4004904 00:14:02.108 01:42:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:14:02.108 01:42:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:14:02.108 01:42:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4004904' 00:14:02.108 killing process with pid 4004904 00:14:02.108 01:42:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # kill 4004904 00:14:02.108 [2024-05-15 01:42:26.006414] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:02.108 01:42:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@971 -- # wait 4004904 00:14:02.367 01:42:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:02.367 01:42:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:02.367 01:42:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:02.367 01:42:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s 
]] 00:14:02.367 01:42:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:02.367 01:42:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:02.367 01:42:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:02.367 01:42:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.952 01:42:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:04.952 00:14:04.952 real 0m16.749s 00:14:04.952 user 0m51.186s 00:14:04.952 sys 0m3.931s 00:14:04.952 01:42:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:04.952 01:42:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:04.952 ************************************ 00:14:04.952 END TEST nvmf_ns_masking 00:14:04.952 ************************************ 00:14:04.952 01:42:28 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:14:04.952 01:42:28 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:04.952 01:42:28 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:14:04.952 01:42:28 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:04.952 01:42:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:04.952 ************************************ 00:14:04.952 START TEST nvmf_nvme_cli 00:14:04.952 ************************************ 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:04.952 * Looking for test storage... 
00:14:04.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:14:04.952 01:42:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:07.482 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:07.482 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:14:07.482 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:07.482 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:07.482 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:07.482 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:07.482 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:07.482 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:14:07.482 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:07.482 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:14:07.482 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:14:07.482 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:14:07.482 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:14:07.482 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:14:07.482 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:14:07.482 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:07.482 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:07.482 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:07.482 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:07.482 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:07.482 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:07.482 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:07.482 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:07.482 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:07.482 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:07.482 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:07.482 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:07.482 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:07.482 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:07.482 01:42:30 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:07.482 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:07.482 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:07.482 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:07.482 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:07.482 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:07.482 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:07.482 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:07.482 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:07.482 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:07.482 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:07.483 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:07.483 Found net devices under 0000:09:00.0: cvl_0_0 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:07.483 Found net devices under 0000:09:00.1: cvl_0_1 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:07.483 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:07.483 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:14:07.483 00:14:07.483 --- 10.0.0.2 ping statistics --- 00:14:07.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.483 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:14:07.483 01:42:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:07.483 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:07.483 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:14:07.483 00:14:07.483 --- 10.0.0.1 ping statistics --- 00:14:07.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.483 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:14:07.483 01:42:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:07.483 01:42:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:14:07.483 01:42:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:07.483 01:42:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:07.483 01:42:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:07.483 01:42:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:07.483 01:42:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:07.483 01:42:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:07.483 01:42:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:07.483 01:42:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:07.483 01:42:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:07.483 01:42:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@721 -- # xtrace_disable 00:14:07.483 01:42:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:07.483 01:42:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=4008743 00:14:07.483 01:42:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:07.483 01:42:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 4008743 00:14:07.483 01:42:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@828 -- # '[' -z 4008743 ']' 00:14:07.483 01:42:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.483 01:42:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:07.483 01:42:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.483 01:42:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:07.483 01:42:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:07.483 [2024-05-15 01:42:31.066500] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:14:07.483 [2024-05-15 01:42:31.066599] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:07.483 EAL: No free 2048 kB hugepages reported on node 1 00:14:07.483 [2024-05-15 01:42:31.142318] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:07.483 [2024-05-15 01:42:31.228387] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:07.483 [2024-05-15 01:42:31.228449] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
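
The netns plumbing traced above is the whole TCP test topology: one port of the NIC pair is moved into a private network namespace for the target, so the initiator (10.0.0.1 on cvl_0_1) and the target (10.0.0.2 on cvl_0_0) talk over real hardware. A condensed sketch, assuming the cvl_0_0/cvl_0_1 netdev names detected earlier:

# Sketch of nvmf_tcp_init as traced above (assumes cvl_0_0/cvl_0_1 exist)
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address (host side)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target itself then runs inside that namespace, which is why the nvmf_tgt invocation above is prefixed with "ip netns exec cvl_0_0_ns_spdk".
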
00:14:07.483 [2024-05-15 01:42:31.228477] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:07.483 [2024-05-15 01:42:31.228489] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:07.483 [2024-05-15 01:42:31.228500] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:07.483 [2024-05-15 01:42:31.228551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:07.483 [2024-05-15 01:42:31.228673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:07.483 [2024-05-15 01:42:31.228724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:07.483 [2024-05-15 01:42:31.228727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.483 01:42:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:14:07.483 01:42:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@861 -- # return 0 00:14:07.483 01:42:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:07.483 01:42:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@727 -- # xtrace_disable 00:14:07.483 01:42:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:07.483 01:42:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:07.483 01:42:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:07.483 01:42:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:07.483 01:42:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:07.483 [2024-05-15 01:42:31.383982] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:07.483 01:42:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:07.483 01:42:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:07.483 01:42:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:07.483 01:42:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:07.741 Malloc0 00:14:07.741 01:42:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:07.741 01:42:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:07.741 01:42:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:07.741 01:42:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:07.741 Malloc1 00:14:07.741 01:42:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:07.741 01:42:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:07.741 01:42:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:07.741 01:42:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:07.741 01:42:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:07.741 01:42:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:07.741 01:42:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:07.741 01:42:31 
nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:07.741 01:42:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:07.741 01:42:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:07.741 01:42:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:07.741 01:42:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:07.741 01:42:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:07.741 01:42:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:07.741 01:42:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:07.741 01:42:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:07.741 [2024-05-15 01:42:31.469732] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:07.741 [2024-05-15 01:42:31.470060] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:07.742 01:42:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:07.742 01:42:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:07.742 01:42:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:07.742 01:42:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:07.742 01:42:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:07.742 01:42:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:14:07.742 00:14:07.742 Discovery Log Number of Records 2, Generation counter 2 00:14:07.742 =====Discovery Log Entry 0====== 00:14:07.742 trtype: tcp 00:14:07.742 adrfam: ipv4 00:14:07.742 subtype: current discovery subsystem 00:14:07.742 treq: not required 00:14:07.742 portid: 0 00:14:07.742 trsvcid: 4420 00:14:07.742 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:07.742 traddr: 10.0.0.2 00:14:07.742 eflags: explicit discovery connections, duplicate discovery information 00:14:07.742 sectype: none 00:14:07.742 =====Discovery Log Entry 1====== 00:14:07.742 trtype: tcp 00:14:07.742 adrfam: ipv4 00:14:07.742 subtype: nvme subsystem 00:14:07.742 treq: not required 00:14:07.742 portid: 0 00:14:07.742 trsvcid: 4420 00:14:07.742 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:07.742 traddr: 10.0.0.2 00:14:07.742 eflags: none 00:14:07.742 sectype: none 00:14:07.742 01:42:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:07.742 01:42:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:07.742 01:42:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:07.742 01:42:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:07.742 01:42:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:07.742 01:42:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:07.742 01:42:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 
00:14:07.742 01:42:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:07.742 01:42:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:07.742 01:42:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:07.742 01:42:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:08.307 01:42:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:08.307 01:42:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local i=0 00:14:08.307 01:42:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:14:08.307 01:42:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # [[ -n 2 ]] 00:14:08.307 01:42:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # nvme_device_counter=2 00:14:08.307 01:42:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # sleep 2 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # nvme_devices=2 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # return 0 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:14:10.832 /dev/nvme0n1 ]] 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:10.832 01:42:34 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:10.832 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # local i=0 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1228 -- # return 0 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:10.832 01:42:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:10.832 rmmod nvme_tcp 00:14:11.090 rmmod nvme_fabrics 00:14:11.090 rmmod nvme_keyring 00:14:11.090 01:42:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:14:11.090 01:42:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:14:11.090 01:42:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:14:11.090 01:42:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 4008743 ']' 00:14:11.090 01:42:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 4008743 00:14:11.090 01:42:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@947 -- # '[' -z 4008743 ']' 00:14:11.090 01:42:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # kill -0 4008743 00:14:11.090 01:42:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # uname 00:14:11.090 01:42:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:14:11.090 01:42:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4008743 00:14:11.090 01:42:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:14:11.090 01:42:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:14:11.090 01:42:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4008743' 00:14:11.090 killing process with pid 4008743 00:14:11.090 01:42:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # kill 4008743 00:14:11.090 [2024-05-15 01:42:34.826515] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:11.090 01:42:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@971 -- # wait 4008743 00:14:11.349 01:42:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:11.349 01:42:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:11.349 01:42:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:11.349 01:42:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:11.349 01:42:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:11.349 01:42:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:11.349 01:42:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:11.349 01:42:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.253 01:42:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:13.253 00:14:13.253 real 0m8.779s 00:14:13.253 user 0m15.990s 00:14:13.253 sys 0m2.469s 00:14:13.253 01:42:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:13.253 01:42:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:13.253 ************************************ 00:14:13.253 END TEST nvmf_nvme_cli 00:14:13.253 ************************************ 00:14:13.253 01:42:37 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:14:13.253 01:42:37 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:13.253 01:42:37 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:14:13.253 01:42:37 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:13.253 01:42:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:13.512 ************************************ 00:14:13.512 START 
TEST nvmf_vfio_user 00:14:13.512 ************************************ 00:14:13.512 01:42:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:13.512 * Looking for test storage... 00:14:13.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:13.512 01:42:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:13.512 01:42:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:13.512 01:42:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:13.512 01:42:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:13.512 01:42:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:13.512 01:42:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:13.512 01:42:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:13.512 01:42:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:13.512 01:42:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:13.512 01:42:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:13.512 01:42:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:13.512 01:42:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:13.512 01:42:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:13.512 01:42:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:13.512 01:42:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:13.512 01:42:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:13.512 01:42:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:13.512 01:42:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:13.512 01:42:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:13.512 01:42:37 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:13.512 01:42:37 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:13.512 01:42:37 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:13.513 01:42:37 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.513 01:42:37 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.513 01:42:37 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.513 01:42:37 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:13.513 01:42:37 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.513 01:42:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:14:13.513 01:42:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:13.513 01:42:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:13.513 01:42:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:13.513 01:42:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:13.513 01:42:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:13.513 01:42:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:13.513 01:42:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:13.513 01:42:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:13.513 01:42:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:13.513 01:42:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:13.513 01:42:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:13.513 01:42:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:13.513 01:42:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:13.513 01:42:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:13.513 01:42:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:13.513 01:42:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 
00:14:13.513 01:42:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:13.513 01:42:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:13.513 01:42:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=4009551 00:14:13.513 01:42:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:13.513 01:42:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 4009551' 00:14:13.513 Process pid: 4009551 00:14:13.513 01:42:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:13.513 01:42:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 4009551 00:14:13.513 01:42:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@828 -- # '[' -z 4009551 ']' 00:14:13.513 01:42:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.513 01:42:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:13.513 01:42:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.513 01:42:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:13.513 01:42:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:13.513 [2024-05-15 01:42:37.330858] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:14:13.513 [2024-05-15 01:42:37.330934] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:13.513 EAL: No free 2048 kB hugepages reported on node 1 00:14:13.513 [2024-05-15 01:42:37.397592] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:13.771 [2024-05-15 01:42:37.480492] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:13.771 [2024-05-15 01:42:37.480576] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:13.771 [2024-05-15 01:42:37.480598] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:13.771 [2024-05-15 01:42:37.480617] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:13.771 [2024-05-15 01:42:37.480632] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
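
waitforlisten above (from autotest_common.sh) blocks until the freshly started nvmf_tgt answers JSON-RPC on its UNIX socket. A rough, hypothetical stand-in for that helper, assuming the default /var/tmp/spdk.sock seen in the trace and SPDK's scripts/rpc.py:

# Hypothetical sketch of waitforlisten: poll the RPC socket, bail out if the
# target process dies first.
waitforlisten() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1           # target exited
        if scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null; then
            return 0                                      # socket is answering
        fi
        sleep 0.2
    done
    return 1
}
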
00:14:13.771 [2024-05-15 01:42:37.480717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:13.771 [2024-05-15 01:42:37.480777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:13.771 [2024-05-15 01:42:37.480842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:13.771 [2024-05-15 01:42:37.480847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.771 01:42:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:14:13.771 01:42:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@861 -- # return 0 00:14:13.771 01:42:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:14.703 01:42:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:15.268 01:42:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:15.268 01:42:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:15.268 01:42:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:15.268 01:42:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:15.268 01:42:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:15.268 Malloc1 00:14:15.268 01:42:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:15.835 01:42:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:15.835 01:42:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:16.094 [2024-05-15 01:42:39.919422] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:16.094 01:42:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:16.094 01:42:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:16.094 01:42:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:16.352 Malloc2 00:14:16.352 01:42:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:16.610 01:42:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:16.868 01:42:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 
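The xtrace above reduces to a short RPC sequence against the nvmf_tgt started earlier (build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]'): register the VFIOUSER transport once, then for each device create a socket directory, back it with a 64 MiB malloc bdev of 512-byte blocks, create a subsystem, attach the bdev as a namespace, and add a listener on the vfio-user socket path. Below is a minimal standalone sketch of that sequence, assuming the target is already up and listening on the default RPC socket /var/tmp/spdk.sock; the commands, NQNs, and paths are taken verbatim from the trace, while the loop form itself is illustrative:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Register the vfio-user transport with the running nvmf_tgt
  $rpc nvmf_create_transport -t VFIOUSER
  for i in 1 2; do
      # Socket directory that the initiator will map as the device
      mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
      # 64 MiB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE above)
      $rpc bdev_malloc_create 64 512 -b Malloc$i
      $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
          -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done

The identify and perf runs that follow then attach to these endpoints with -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'.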
00:14:17.126 01:42:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:17.126 01:42:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:17.126 01:42:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:17.126 01:42:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:17.126 01:42:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:17.126 01:42:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:17.126 [2024-05-15 01:42:40.993804] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:14:17.126 [2024-05-15 01:42:40.993843] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4009971 ] 00:14:17.126 EAL: No free 2048 kB hugepages reported on node 1 00:14:17.126 [2024-05-15 01:42:41.026339] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:17.126 [2024-05-15 01:42:41.036067] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:17.126 [2024-05-15 01:42:41.036096] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f26544e4000 00:14:17.126 [2024-05-15 01:42:41.037061] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:17.126 [2024-05-15 01:42:41.038063] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:17.126 [2024-05-15 01:42:41.039067] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:17.126 [2024-05-15 01:42:41.040072] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:17.126 [2024-05-15 01:42:41.041076] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:17.126 [2024-05-15 01:42:41.044226] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:17.126 [2024-05-15 01:42:41.045095] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:17.126 [2024-05-15 01:42:41.046100] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:17.126 [2024-05-15 01:42:41.047109] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:17.126 [2024-05-15 01:42:41.047130] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f265329a000 00:14:17.126 [2024-05-15 01:42:41.048271] 
vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:17.386 [2024-05-15 01:42:41.063993] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:17.386 [2024-05-15 01:42:41.064034] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:14:17.386 [2024-05-15 01:42:41.067228] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:17.386 [2024-05-15 01:42:41.067285] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:17.386 [2024-05-15 01:42:41.067381] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:14:17.386 [2024-05-15 01:42:41.067409] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:14:17.386 [2024-05-15 01:42:41.067421] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:14:17.386 [2024-05-15 01:42:41.068239] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:17.386 [2024-05-15 01:42:41.068274] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:14:17.386 [2024-05-15 01:42:41.068288] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:14:17.386 [2024-05-15 01:42:41.069247] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:17.386 [2024-05-15 01:42:41.069266] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:14:17.386 [2024-05-15 01:42:41.069279] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:14:17.386 [2024-05-15 01:42:41.070259] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:17.386 [2024-05-15 01:42:41.070278] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:17.386 [2024-05-15 01:42:41.071258] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:17.386 [2024-05-15 01:42:41.071277] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:14:17.386 [2024-05-15 01:42:41.071287] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:14:17.386 [2024-05-15 01:42:41.071298] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:17.386 
[2024-05-15 01:42:41.071409] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:14:17.386 [2024-05-15 01:42:41.071417] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:17.386 [2024-05-15 01:42:41.071426] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:17.386 [2024-05-15 01:42:41.072267] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:17.386 [2024-05-15 01:42:41.073271] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:17.386 [2024-05-15 01:42:41.074276] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:17.386 [2024-05-15 01:42:41.075270] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:17.386 [2024-05-15 01:42:41.075368] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:17.386 [2024-05-15 01:42:41.076287] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:17.386 [2024-05-15 01:42:41.076310] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:17.386 [2024-05-15 01:42:41.076320] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:14:17.386 [2024-05-15 01:42:41.076344] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:14:17.386 [2024-05-15 01:42:41.076368] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:14:17.386 [2024-05-15 01:42:41.076396] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:17.386 [2024-05-15 01:42:41.076405] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:17.386 [2024-05-15 01:42:41.076426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:17.386 [2024-05-15 01:42:41.076482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:17.386 [2024-05-15 01:42:41.076499] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:14:17.386 [2024-05-15 01:42:41.076518] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:14:17.386 [2024-05-15 01:42:41.076525] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:14:17.386 [2024-05-15 01:42:41.076532] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:17.386 [2024-05-15 01:42:41.076542] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:14:17.386 [2024-05-15 01:42:41.076549] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:14:17.386 [2024-05-15 01:42:41.076557] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:14:17.386 [2024-05-15 01:42:41.076573] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:14:17.386 [2024-05-15 01:42:41.076591] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:17.386 [2024-05-15 01:42:41.076617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:17.386 [2024-05-15 01:42:41.076640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:17.387 [2024-05-15 01:42:41.076654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:17.387 [2024-05-15 01:42:41.076665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:17.387 [2024-05-15 01:42:41.076676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:17.387 [2024-05-15 01:42:41.076684] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:17.387 [2024-05-15 01:42:41.076695] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:17.387 [2024-05-15 01:42:41.076708] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:17.387 [2024-05-15 01:42:41.076723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:17.387 [2024-05-15 01:42:41.076734] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:14:17.387 [2024-05-15 01:42:41.076746] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:17.387 [2024-05-15 01:42:41.076757] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:14:17.387 [2024-05-15 01:42:41.076771] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:17.387 [2024-05-15 01:42:41.076784] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:17.387 [2024-05-15 
01:42:41.076797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:17.387 [2024-05-15 01:42:41.076850] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:14:17.387 [2024-05-15 01:42:41.076865] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:17.387 [2024-05-15 01:42:41.076879] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:17.387 [2024-05-15 01:42:41.076887] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:17.387 [2024-05-15 01:42:41.076897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:17.387 [2024-05-15 01:42:41.076913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:17.387 [2024-05-15 01:42:41.076935] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:14:17.387 [2024-05-15 01:42:41.076950] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:14:17.387 [2024-05-15 01:42:41.076963] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:14:17.387 [2024-05-15 01:42:41.076974] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:17.387 [2024-05-15 01:42:41.076982] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:17.387 [2024-05-15 01:42:41.076991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:17.387 [2024-05-15 01:42:41.077008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:17.387 [2024-05-15 01:42:41.077025] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:17.387 [2024-05-15 01:42:41.077038] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:17.387 [2024-05-15 01:42:41.077049] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:17.387 [2024-05-15 01:42:41.077056] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:17.387 [2024-05-15 01:42:41.077065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:17.387 [2024-05-15 01:42:41.077076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:17.387 [2024-05-15 01:42:41.077094] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:17.387 
[2024-05-15 01:42:41.077106] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:14:17.387 [2024-05-15 01:42:41.077119] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:14:17.387 [2024-05-15 01:42:41.077133] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:17.387 [2024-05-15 01:42:41.077142] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:14:17.387 [2024-05-15 01:42:41.077150] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:14:17.387 [2024-05-15 01:42:41.077158] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:14:17.387 [2024-05-15 01:42:41.077166] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:14:17.387 [2024-05-15 01:42:41.077229] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:17.387 [2024-05-15 01:42:41.077251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:17.387 [2024-05-15 01:42:41.077271] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:17.387 [2024-05-15 01:42:41.077283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:17.387 [2024-05-15 01:42:41.077299] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:17.387 [2024-05-15 01:42:41.077310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:17.387 [2024-05-15 01:42:41.077326] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:17.387 [2024-05-15 01:42:41.077337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:17.387 [2024-05-15 01:42:41.077354] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:17.387 [2024-05-15 01:42:41.077363] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:17.387 [2024-05-15 01:42:41.077369] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:17.387 [2024-05-15 01:42:41.077375] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:17.387 [2024-05-15 01:42:41.077385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:17.387 [2024-05-15 01:42:41.077396] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:17.387 [2024-05-15 01:42:41.077405] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:17.387 [2024-05-15 01:42:41.077415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:17.387 [2024-05-15 01:42:41.077427] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:17.387 [2024-05-15 01:42:41.077435] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:17.387 [2024-05-15 01:42:41.077444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:17.387 [2024-05-15 01:42:41.077461] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:17.387 [2024-05-15 01:42:41.077470] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:17.387 [2024-05-15 01:42:41.077479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:17.387 [2024-05-15 01:42:41.077494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:17.387 [2024-05-15 01:42:41.077528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:17.387 [2024-05-15 01:42:41.077545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:17.387 [2024-05-15 01:42:41.077560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:17.387 ===================================================== 00:14:17.387 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:17.387 ===================================================== 00:14:17.387 Controller Capabilities/Features 00:14:17.387 ================================ 00:14:17.387 Vendor ID: 4e58 00:14:17.387 Subsystem Vendor ID: 4e58 00:14:17.387 Serial Number: SPDK1 00:14:17.387 Model Number: SPDK bdev Controller 00:14:17.387 Firmware Version: 24.05 00:14:17.387 Recommended Arb Burst: 6 00:14:17.387 IEEE OUI Identifier: 8d 6b 50 00:14:17.387 Multi-path I/O 00:14:17.387 May have multiple subsystem ports: Yes 00:14:17.387 May have multiple controllers: Yes 00:14:17.387 Associated with SR-IOV VF: No 00:14:17.387 Max Data Transfer Size: 131072 00:14:17.387 Max Number of Namespaces: 32 00:14:17.388 Max Number of I/O Queues: 127 00:14:17.388 NVMe Specification Version (VS): 1.3 00:14:17.388 NVMe Specification Version (Identify): 1.3 00:14:17.388 Maximum Queue Entries: 256 00:14:17.388 Contiguous Queues Required: Yes 00:14:17.388 Arbitration Mechanisms Supported 00:14:17.388 Weighted Round Robin: Not Supported 00:14:17.388 Vendor Specific: Not Supported 00:14:17.388 Reset Timeout: 15000 ms 00:14:17.388 Doorbell Stride: 4 bytes 00:14:17.388 NVM Subsystem Reset: Not Supported 00:14:17.388 Command Sets Supported 00:14:17.388 NVM Command Set: Supported 00:14:17.388 Boot Partition: Not Supported 00:14:17.388 Memory Page Size Minimum: 4096 bytes 00:14:17.388 Memory Page Size Maximum: 4096 bytes 00:14:17.388 Persistent Memory Region: Not Supported 00:14:17.388 Optional Asynchronous 
Events Supported 00:14:17.388 Namespace Attribute Notices: Supported 00:14:17.388 Firmware Activation Notices: Not Supported 00:14:17.388 ANA Change Notices: Not Supported 00:14:17.388 PLE Aggregate Log Change Notices: Not Supported 00:14:17.388 LBA Status Info Alert Notices: Not Supported 00:14:17.388 EGE Aggregate Log Change Notices: Not Supported 00:14:17.388 Normal NVM Subsystem Shutdown event: Not Supported 00:14:17.388 Zone Descriptor Change Notices: Not Supported 00:14:17.388 Discovery Log Change Notices: Not Supported 00:14:17.388 Controller Attributes 00:14:17.388 128-bit Host Identifier: Supported 00:14:17.388 Non-Operational Permissive Mode: Not Supported 00:14:17.388 NVM Sets: Not Supported 00:14:17.388 Read Recovery Levels: Not Supported 00:14:17.388 Endurance Groups: Not Supported 00:14:17.388 Predictable Latency Mode: Not Supported 00:14:17.388 Traffic Based Keep ALive: Not Supported 00:14:17.388 Namespace Granularity: Not Supported 00:14:17.388 SQ Associations: Not Supported 00:14:17.388 UUID List: Not Supported 00:14:17.388 Multi-Domain Subsystem: Not Supported 00:14:17.388 Fixed Capacity Management: Not Supported 00:14:17.388 Variable Capacity Management: Not Supported 00:14:17.388 Delete Endurance Group: Not Supported 00:14:17.388 Delete NVM Set: Not Supported 00:14:17.388 Extended LBA Formats Supported: Not Supported 00:14:17.388 Flexible Data Placement Supported: Not Supported 00:14:17.388 00:14:17.388 Controller Memory Buffer Support 00:14:17.388 ================================ 00:14:17.388 Supported: No 00:14:17.388 00:14:17.388 Persistent Memory Region Support 00:14:17.388 ================================ 00:14:17.388 Supported: No 00:14:17.388 00:14:17.388 Admin Command Set Attributes 00:14:17.388 ============================ 00:14:17.388 Security Send/Receive: Not Supported 00:14:17.388 Format NVM: Not Supported 00:14:17.388 Firmware Activate/Download: Not Supported 00:14:17.388 Namespace Management: Not Supported 00:14:17.388 Device Self-Test: Not Supported 00:14:17.388 Directives: Not Supported 00:14:17.388 NVMe-MI: Not Supported 00:14:17.388 Virtualization Management: Not Supported 00:14:17.388 Doorbell Buffer Config: Not Supported 00:14:17.388 Get LBA Status Capability: Not Supported 00:14:17.388 Command & Feature Lockdown Capability: Not Supported 00:14:17.388 Abort Command Limit: 4 00:14:17.388 Async Event Request Limit: 4 00:14:17.388 Number of Firmware Slots: N/A 00:14:17.388 Firmware Slot 1 Read-Only: N/A 00:14:17.388 Firmware Activation Without Reset: N/A 00:14:17.388 Multiple Update Detection Support: N/A 00:14:17.388 Firmware Update Granularity: No Information Provided 00:14:17.388 Per-Namespace SMART Log: No 00:14:17.388 Asymmetric Namespace Access Log Page: Not Supported 00:14:17.388 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:17.388 Command Effects Log Page: Supported 00:14:17.388 Get Log Page Extended Data: Supported 00:14:17.388 Telemetry Log Pages: Not Supported 00:14:17.388 Persistent Event Log Pages: Not Supported 00:14:17.388 Supported Log Pages Log Page: May Support 00:14:17.388 Commands Supported & Effects Log Page: Not Supported 00:14:17.388 Feature Identifiers & Effects Log Page:May Support 00:14:17.388 NVMe-MI Commands & Effects Log Page: May Support 00:14:17.388 Data Area 4 for Telemetry Log: Not Supported 00:14:17.388 Error Log Page Entries Supported: 128 00:14:17.388 Keep Alive: Supported 00:14:17.388 Keep Alive Granularity: 10000 ms 00:14:17.388 00:14:17.388 NVM Command Set Attributes 00:14:17.388 ========================== 
00:14:17.388 Submission Queue Entry Size 00:14:17.388 Max: 64 00:14:17.388 Min: 64 00:14:17.388 Completion Queue Entry Size 00:14:17.388 Max: 16 00:14:17.388 Min: 16 00:14:17.388 Number of Namespaces: 32 00:14:17.388 Compare Command: Supported 00:14:17.388 Write Uncorrectable Command: Not Supported 00:14:17.388 Dataset Management Command: Supported 00:14:17.388 Write Zeroes Command: Supported 00:14:17.388 Set Features Save Field: Not Supported 00:14:17.388 Reservations: Not Supported 00:14:17.388 Timestamp: Not Supported 00:14:17.388 Copy: Supported 00:14:17.388 Volatile Write Cache: Present 00:14:17.388 Atomic Write Unit (Normal): 1 00:14:17.388 Atomic Write Unit (PFail): 1 00:14:17.388 Atomic Compare & Write Unit: 1 00:14:17.388 Fused Compare & Write: Supported 00:14:17.388 Scatter-Gather List 00:14:17.388 SGL Command Set: Supported (Dword aligned) 00:14:17.388 SGL Keyed: Not Supported 00:14:17.388 SGL Bit Bucket Descriptor: Not Supported 00:14:17.388 SGL Metadata Pointer: Not Supported 00:14:17.388 Oversized SGL: Not Supported 00:14:17.388 SGL Metadata Address: Not Supported 00:14:17.388 SGL Offset: Not Supported 00:14:17.388 Transport SGL Data Block: Not Supported 00:14:17.388 Replay Protected Memory Block: Not Supported 00:14:17.388 00:14:17.388 Firmware Slot Information 00:14:17.388 ========================= 00:14:17.388 Active slot: 1 00:14:17.388 Slot 1 Firmware Revision: 24.05 00:14:17.388 00:14:17.388 00:14:17.388 Commands Supported and Effects 00:14:17.388 ============================== 00:14:17.388 Admin Commands 00:14:17.389 -------------- 00:14:17.389 Get Log Page (02h): Supported 00:14:17.389 Identify (06h): Supported 00:14:17.389 Abort (08h): Supported 00:14:17.389 Set Features (09h): Supported 00:14:17.389 Get Features (0Ah): Supported 00:14:17.389 Asynchronous Event Request (0Ch): Supported 00:14:17.389 Keep Alive (18h): Supported 00:14:17.389 I/O Commands 00:14:17.389 ------------ 00:14:17.389 Flush (00h): Supported LBA-Change 00:14:17.389 Write (01h): Supported LBA-Change 00:14:17.389 Read (02h): Supported 00:14:17.389 Compare (05h): Supported 00:14:17.389 Write Zeroes (08h): Supported LBA-Change 00:14:17.389 Dataset Management (09h): Supported LBA-Change 00:14:17.389 Copy (19h): Supported LBA-Change 00:14:17.389 Unknown (79h): Supported LBA-Change 00:14:17.389 Unknown (7Ah): Supported 00:14:17.389 00:14:17.389 Error Log 00:14:17.389 ========= 00:14:17.389 00:14:17.389 Arbitration 00:14:17.389 =========== 00:14:17.389 Arbitration Burst: 1 00:14:17.389 00:14:17.389 Power Management 00:14:17.389 ================ 00:14:17.389 Number of Power States: 1 00:14:17.389 Current Power State: Power State #0 00:14:17.389 Power State #0: 00:14:17.389 Max Power: 0.00 W 00:14:17.389 Non-Operational State: Operational 00:14:17.389 Entry Latency: Not Reported 00:14:17.389 Exit Latency: Not Reported 00:14:17.389 Relative Read Throughput: 0 00:14:17.389 Relative Read Latency: 0 00:14:17.389 Relative Write Throughput: 0 00:14:17.389 Relative Write Latency: 0 00:14:17.389 Idle Power: Not Reported 00:14:17.389 Active Power: Not Reported 00:14:17.389 Non-Operational Permissive Mode: Not Supported 00:14:17.389 00:14:17.389 Health Information 00:14:17.389 ================== 00:14:17.389 Critical Warnings: 00:14:17.389 Available Spare Space: OK 00:14:17.389 Temperature: OK 00:14:17.389 Device Reliability: OK 00:14:17.389 Read Only: No 00:14:17.389 Volatile Memory Backup: OK 00:14:17.389 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:17.389 [2024-05-15 01:42:41.077678] nvme_qpair.c:
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:17.389 [2024-05-15 01:42:41.077695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:17.389 [2024-05-15 01:42:41.077732] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:14:17.389 [2024-05-15 01:42:41.077749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.389 [2024-05-15 01:42:41.077760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.389 [2024-05-15 01:42:41.077770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.389 [2024-05-15 01:42:41.077780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.389 [2024-05-15 01:42:41.080244] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:17.389 [2024-05-15 01:42:41.080268] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:17.389 [2024-05-15 01:42:41.080307] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:17.389 [2024-05-15 01:42:41.080383] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:14:17.389 [2024-05-15 01:42:41.080403] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:14:17.389 [2024-05-15 01:42:41.081319] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:17.389 [2024-05-15 01:42:41.081343] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:14:17.389 [2024-05-15 01:42:41.081399] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:17.389 [2024-05-15 01:42:41.085234] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:17.389 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:17.389 Available Spare: 0% 00:14:17.389 Available Spare Threshold: 0% 00:14:17.389 Life Percentage Used: 0% 00:14:17.389 Data Units Read: 0 00:14:17.389 Data Units Written: 0 00:14:17.389 Host Read Commands: 0 00:14:17.389 Host Write Commands: 0 00:14:17.389 Controller Busy Time: 0 minutes 00:14:17.389 Power Cycles: 0 00:14:17.389 Power On Hours: 0 hours 00:14:17.389 Unsafe Shutdowns: 0 00:14:17.389 Unrecoverable Media Errors: 0 00:14:17.389 Lifetime Error Log Entries: 0 00:14:17.389 Warning Temperature Time: 0 minutes 00:14:17.389 Critical Temperature Time: 0 minutes 00:14:17.389 00:14:17.389 Number of Queues 00:14:17.389 ================ 00:14:17.389 Number of I/O Submission Queues: 127 00:14:17.389 Number of I/O Completion Queues: 127 00:14:17.389 00:14:17.389 Active Namespaces 00:14:17.389 ================= 00:14:17.389 Namespace
ID:1 00:14:17.389 Error Recovery Timeout: Unlimited 00:14:17.389 Command Set Identifier: NVM (00h) 00:14:17.389 Deallocate: Supported 00:14:17.389 Deallocated/Unwritten Error: Not Supported 00:14:17.389 Deallocated Read Value: Unknown 00:14:17.389 Deallocate in Write Zeroes: Not Supported 00:14:17.389 Deallocated Guard Field: 0xFFFF 00:14:17.389 Flush: Supported 00:14:17.389 Reservation: Supported 00:14:17.389 Namespace Sharing Capabilities: Multiple Controllers 00:14:17.389 Size (in LBAs): 131072 (0GiB) 00:14:17.389 Capacity (in LBAs): 131072 (0GiB) 00:14:17.389 Utilization (in LBAs): 131072 (0GiB) 00:14:17.389 NGUID: E051EAEA063B491D87A058CDD827EE97 00:14:17.389 UUID: e051eaea-063b-491d-87a0-58cdd827ee97 00:14:17.389 Thin Provisioning: Not Supported 00:14:17.389 Per-NS Atomic Units: Yes 00:14:17.389 Atomic Boundary Size (Normal): 0 00:14:17.389 Atomic Boundary Size (PFail): 0 00:14:17.389 Atomic Boundary Offset: 0 00:14:17.389 Maximum Single Source Range Length: 65535 00:14:17.389 Maximum Copy Length: 65535 00:14:17.389 Maximum Source Range Count: 1 00:14:17.389 NGUID/EUI64 Never Reused: No 00:14:17.389 Namespace Write Protected: No 00:14:17.389 Number of LBA Formats: 1 00:14:17.389 Current LBA Format: LBA Format #00 00:14:17.389 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:17.389 00:14:17.389 01:42:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:17.389 EAL: No free 2048 kB hugepages reported on node 1 00:14:17.648 [2024-05-15 01:42:41.317835] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:22.911 Initializing NVMe Controllers 00:14:22.911 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:22.911 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:22.911 Initialization complete. Launching workers. 00:14:22.911 ======================================================== 00:14:22.911 Latency(us) 00:14:22.911 Device Information : IOPS MiB/s Average min max 00:14:22.911 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34803.17 135.95 3677.60 1167.62 8256.20 00:14:22.911 ======================================================== 00:14:22.911 Total : 34803.17 135.95 3677.60 1167.62 8256.20 00:14:22.911 00:14:22.911 [2024-05-15 01:42:46.339764] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:22.911 01:42:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:22.911 EAL: No free 2048 kB hugepages reported on node 1 00:14:22.911 [2024-05-15 01:42:46.577922] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:28.192 Initializing NVMe Controllers 00:14:28.192 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:28.192 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:28.192 Initialization complete. Launching workers. 
00:14:28.192 ======================================================== 00:14:28.192 Latency(us) 00:14:28.192 Device Information : IOPS MiB/s Average min max 00:14:28.192 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15999.66 62.50 8005.40 6963.42 11985.99 00:14:28.192 ======================================================== 00:14:28.192 Total : 15999.66 62.50 8005.40 6963.42 11985.99 00:14:28.192 00:14:28.192 [2024-05-15 01:42:51.626727] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:28.192 01:42:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:28.192 EAL: No free 2048 kB hugepages reported on node 1 00:14:28.192 [2024-05-15 01:42:51.848840] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:33.467 [2024-05-15 01:42:56.920624] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:33.467 Initializing NVMe Controllers 00:14:33.467 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:33.467 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:33.467 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:33.467 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:33.467 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:33.467 Initialization complete. Launching workers. 00:14:33.467 Starting thread on core 2 00:14:33.467 Starting thread on core 3 00:14:33.467 Starting thread on core 1 00:14:33.467 01:42:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:33.467 EAL: No free 2048 kB hugepages reported on node 1 00:14:33.467 [2024-05-15 01:42:57.237692] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:36.749 [2024-05-15 01:43:00.304504] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:36.749 Initializing NVMe Controllers 00:14:36.749 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:36.749 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:36.749 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:36.749 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:36.749 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:36.749 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:36.749 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:36.749 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:36.749 Initialization complete. Launching workers. 
00:14:36.749 Starting thread on core 1 with urgent priority queue 00:14:36.749 Starting thread on core 2 with urgent priority queue 00:14:36.749 Starting thread on core 3 with urgent priority queue 00:14:36.749 Starting thread on core 0 with urgent priority queue 00:14:36.749 SPDK bdev Controller (SPDK1 ) core 0: 5541.00 IO/s 18.05 secs/100000 ios 00:14:36.749 SPDK bdev Controller (SPDK1 ) core 1: 5442.67 IO/s 18.37 secs/100000 ios 00:14:36.749 SPDK bdev Controller (SPDK1 ) core 2: 5230.33 IO/s 19.12 secs/100000 ios 00:14:36.749 SPDK bdev Controller (SPDK1 ) core 3: 5207.33 IO/s 19.20 secs/100000 ios 00:14:36.749 ======================================================== 00:14:36.749 00:14:36.749 01:43:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:36.749 EAL: No free 2048 kB hugepages reported on node 1 00:14:36.749 [2024-05-15 01:43:00.613810] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:36.749 Initializing NVMe Controllers 00:14:36.749 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:36.749 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:36.749 Namespace ID: 1 size: 0GB 00:14:36.749 Initialization complete. 00:14:36.749 INFO: using host memory buffer for IO 00:14:36.749 Hello world! 00:14:36.749 [2024-05-15 01:43:00.648446] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:37.006 01:43:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:37.006 EAL: No free 2048 kB hugepages reported on node 1 00:14:37.264 [2024-05-15 01:43:00.960775] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:38.197 Initializing NVMe Controllers 00:14:38.197 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:38.197 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:38.197 Initialization complete. Launching workers. 
00:14:38.197 submit (in ns) avg, min, max = 6911.4, 3545.6, 4002354.4 00:14:38.197 complete (in ns) avg, min, max = 26615.2, 2075.6, 5011470.0 00:14:38.197 00:14:38.197 Submit histogram 00:14:38.197 ================ 00:14:38.197 Range in us Cumulative Count 00:14:38.197 3.532 - 3.556: 0.0074% ( 1) 00:14:38.197 3.556 - 3.579: 0.6704% ( 89) 00:14:38.197 3.579 - 3.603: 6.9130% ( 838) 00:14:38.197 3.603 - 3.627: 17.1558% ( 1375) 00:14:38.197 3.627 - 3.650: 29.5664% ( 1666) 00:14:38.197 3.650 - 3.674: 38.7515% ( 1233) 00:14:38.197 3.674 - 3.698: 46.2232% ( 1003) 00:14:38.197 3.698 - 3.721: 51.4899% ( 707) 00:14:38.197 3.721 - 3.745: 56.2053% ( 633) 00:14:38.197 3.745 - 3.769: 59.8853% ( 494) 00:14:38.197 3.769 - 3.793: 62.5298% ( 355) 00:14:38.197 3.793 - 3.816: 64.9434% ( 324) 00:14:38.197 3.816 - 3.840: 67.9380% ( 402) 00:14:38.197 3.840 - 3.864: 72.5715% ( 622) 00:14:38.197 3.864 - 3.887: 77.6892% ( 687) 00:14:38.197 3.887 - 3.911: 81.9949% ( 578) 00:14:38.197 3.911 - 3.935: 84.9374% ( 395) 00:14:38.197 3.935 - 3.959: 86.8072% ( 251) 00:14:38.197 3.959 - 3.982: 88.2524% ( 194) 00:14:38.197 3.982 - 4.006: 89.8018% ( 208) 00:14:38.197 4.006 - 4.030: 90.8671% ( 143) 00:14:38.197 4.030 - 4.053: 91.8057% ( 126) 00:14:38.197 4.053 - 4.077: 92.7071% ( 121) 00:14:38.197 4.077 - 4.101: 93.7053% ( 134) 00:14:38.197 4.101 - 4.124: 94.5918% ( 119) 00:14:38.197 4.124 - 4.148: 95.3516% ( 102) 00:14:38.197 4.148 - 4.172: 95.8060% ( 61) 00:14:38.197 4.172 - 4.196: 96.0295% ( 30) 00:14:38.197 4.196 - 4.219: 96.3275% ( 40) 00:14:38.197 4.219 - 4.243: 96.5435% ( 29) 00:14:38.197 4.243 - 4.267: 96.7372% ( 26) 00:14:38.197 4.267 - 4.290: 96.9756% ( 32) 00:14:38.197 4.290 - 4.314: 97.0948% ( 16) 00:14:38.197 4.314 - 4.338: 97.2065% ( 15) 00:14:38.197 4.338 - 4.361: 97.2884% ( 11) 00:14:38.197 4.361 - 4.385: 97.3704% ( 11) 00:14:38.197 4.385 - 4.409: 97.4225% ( 7) 00:14:38.197 4.409 - 4.433: 97.4747% ( 7) 00:14:38.197 4.433 - 4.456: 97.5194% ( 6) 00:14:38.197 4.456 - 4.480: 97.5790% ( 8) 00:14:38.197 4.480 - 4.504: 97.6237% ( 6) 00:14:38.197 4.504 - 4.527: 97.6535% ( 4) 00:14:38.197 4.527 - 4.551: 97.6609% ( 1) 00:14:38.197 4.551 - 4.575: 97.6758% ( 2) 00:14:38.197 4.575 - 4.599: 97.6907% ( 2) 00:14:38.197 4.599 - 4.622: 97.6982% ( 1) 00:14:38.197 4.622 - 4.646: 97.7205% ( 3) 00:14:38.197 4.646 - 4.670: 97.7279% ( 1) 00:14:38.197 4.670 - 4.693: 97.7428% ( 2) 00:14:38.197 4.693 - 4.717: 97.7503% ( 1) 00:14:38.197 4.741 - 4.764: 97.7652% ( 2) 00:14:38.197 4.764 - 4.788: 97.8024% ( 5) 00:14:38.197 4.788 - 4.812: 97.8397% ( 5) 00:14:38.197 4.812 - 4.836: 97.8620% ( 3) 00:14:38.197 4.836 - 4.859: 97.9365% ( 10) 00:14:38.197 4.859 - 4.883: 97.9514% ( 2) 00:14:38.197 4.883 - 4.907: 97.9961% ( 6) 00:14:38.197 4.907 - 4.930: 98.0334% ( 5) 00:14:38.197 4.930 - 4.954: 98.0706% ( 5) 00:14:38.197 4.954 - 4.978: 98.0930% ( 3) 00:14:38.197 4.978 - 5.001: 98.1377% ( 6) 00:14:38.197 5.001 - 5.025: 98.1675% ( 4) 00:14:38.197 5.025 - 5.049: 98.1898% ( 3) 00:14:38.197 5.049 - 5.073: 98.2494% ( 8) 00:14:38.197 5.073 - 5.096: 98.2867% ( 5) 00:14:38.197 5.096 - 5.120: 98.3239% ( 5) 00:14:38.197 5.120 - 5.144: 98.3686% ( 6) 00:14:38.197 5.144 - 5.167: 98.3984% ( 4) 00:14:38.197 5.167 - 5.191: 98.4133% ( 2) 00:14:38.197 5.191 - 5.215: 98.4505% ( 5) 00:14:38.197 5.215 - 5.239: 98.4729% ( 3) 00:14:38.197 5.239 - 5.262: 98.4878% ( 2) 00:14:38.197 5.262 - 5.286: 98.4952% ( 1) 00:14:38.197 5.286 - 5.310: 98.5027% ( 1) 00:14:38.197 5.310 - 5.333: 98.5101% ( 1) 00:14:38.197 5.404 - 5.428: 98.5250% ( 2) 00:14:38.197 5.452 - 5.476: 98.5474% ( 3) 
00:14:38.197 5.476 - 5.499: 98.5697% ( 3) 00:14:38.197 5.594 - 5.618: 98.5921% ( 3) 00:14:38.197 5.713 - 5.736: 98.5995% ( 1) 00:14:38.197 5.973 - 5.997: 98.6144% ( 2) 00:14:38.197 5.997 - 6.021: 98.6219% ( 1) 00:14:38.197 6.116 - 6.163: 98.6442% ( 3) 00:14:38.197 6.684 - 6.732: 98.6517% ( 1) 00:14:38.197 6.827 - 6.874: 98.6591% ( 1) 00:14:38.197 7.253 - 7.301: 98.6666% ( 1) 00:14:38.197 7.301 - 7.348: 98.6740% ( 1) 00:14:38.197 7.396 - 7.443: 98.6815% ( 1) 00:14:38.197 7.490 - 7.538: 98.6964% ( 2) 00:14:38.197 7.585 - 7.633: 98.7113% ( 2) 00:14:38.197 7.633 - 7.680: 98.7187% ( 1) 00:14:38.197 7.775 - 7.822: 98.7262% ( 1) 00:14:38.197 8.012 - 8.059: 98.7336% ( 1) 00:14:38.197 8.059 - 8.107: 98.7411% ( 1) 00:14:38.197 8.107 - 8.154: 98.7485% ( 1) 00:14:38.197 8.154 - 8.201: 98.7560% ( 1) 00:14:38.197 8.344 - 8.391: 98.7634% ( 1) 00:14:38.197 8.486 - 8.533: 98.7709% ( 1) 00:14:38.197 8.533 - 8.581: 98.7783% ( 1) 00:14:38.197 8.581 - 8.628: 98.7932% ( 2) 00:14:38.197 8.723 - 8.770: 98.8081% ( 2) 00:14:38.197 8.770 - 8.818: 98.8156% ( 1) 00:14:38.197 8.913 - 8.960: 98.8305% ( 2) 00:14:38.197 9.055 - 9.102: 98.8379% ( 1) 00:14:38.197 9.102 - 9.150: 98.8454% ( 1) 00:14:38.197 9.197 - 9.244: 98.8603% ( 2) 00:14:38.198 9.292 - 9.339: 98.8677% ( 1) 00:14:38.198 9.434 - 9.481: 98.8751% ( 1) 00:14:38.198 9.576 - 9.624: 98.8826% ( 1) 00:14:38.198 9.624 - 9.671: 98.8900% ( 1) 00:14:38.198 9.766 - 9.813: 98.9049% ( 2) 00:14:38.198 10.098 - 10.145: 98.9124% ( 1) 00:14:38.198 10.335 - 10.382: 98.9198% ( 1) 00:14:38.198 10.524 - 10.572: 98.9273% ( 1) 00:14:38.198 10.761 - 10.809: 98.9347% ( 1) 00:14:38.198 11.425 - 11.473: 98.9422% ( 1) 00:14:38.198 12.421 - 12.516: 98.9496% ( 1) 00:14:38.198 12.516 - 12.610: 98.9571% ( 1) 00:14:38.198 12.705 - 12.800: 98.9645% ( 1) 00:14:38.198 13.464 - 13.559: 98.9720% ( 1) 00:14:38.198 14.601 - 14.696: 98.9794% ( 1) 00:14:38.198 14.791 - 14.886: 98.9869% ( 1) 00:14:38.198 16.877 - 16.972: 98.9943% ( 1) 00:14:38.198 17.161 - 17.256: 99.0092% ( 2) 00:14:38.198 17.256 - 17.351: 99.0241% ( 2) 00:14:38.198 17.351 - 17.446: 99.0465% ( 3) 00:14:38.198 17.446 - 17.541: 99.0763% ( 4) 00:14:38.198 17.541 - 17.636: 99.0912% ( 2) 00:14:38.198 17.636 - 17.730: 99.1359% ( 6) 00:14:38.198 17.730 - 17.825: 99.1731% ( 5) 00:14:38.198 17.825 - 17.920: 99.2104% ( 5) 00:14:38.198 17.920 - 18.015: 99.2551% ( 6) 00:14:38.198 18.015 - 18.110: 99.3072% ( 7) 00:14:38.198 18.110 - 18.204: 99.3743% ( 9) 00:14:38.198 18.204 - 18.299: 99.4636% ( 12) 00:14:38.198 18.299 - 18.394: 99.5009% ( 5) 00:14:38.198 18.394 - 18.489: 99.5679% ( 9) 00:14:38.198 18.489 - 18.584: 99.6052% ( 5) 00:14:38.198 18.584 - 18.679: 99.6871% ( 11) 00:14:38.198 18.679 - 18.773: 99.7020% ( 2) 00:14:38.198 18.773 - 18.868: 99.7393% ( 5) 00:14:38.198 18.868 - 18.963: 99.7691% ( 4) 00:14:38.198 19.058 - 19.153: 99.7914% ( 3) 00:14:38.198 19.153 - 19.247: 99.8063% ( 2) 00:14:38.198 19.247 - 19.342: 99.8287% ( 3) 00:14:38.198 19.342 - 19.437: 99.8510% ( 3) 00:14:38.198 19.437 - 19.532: 99.8585% ( 1) 00:14:38.198 19.627 - 19.721: 99.8659% ( 1) 00:14:38.198 19.721 - 19.816: 99.8734% ( 1) 00:14:38.198 19.911 - 20.006: 99.8808% ( 1) 00:14:38.198 20.290 - 20.385: 99.8883% ( 1) 00:14:38.198 20.385 - 20.480: 99.8957% ( 1) 00:14:38.198 21.807 - 21.902: 99.9032% ( 1) 00:14:38.198 21.902 - 21.997: 99.9106% ( 1) 00:14:38.198 23.324 - 23.419: 99.9181% ( 1) 00:14:38.198 25.790 - 25.979: 99.9255% ( 1) 00:14:38.198 3980.705 - 4004.978: 100.0000% ( 10) 00:14:38.198 00:14:38.198 Complete histogram 00:14:38.198 ================== 00:14:38.198 
Range in us Cumulative Count 00:14:38.198 2.074 - 2.086: 4.5218% ( 607) 00:14:38.198 2.086 - 2.098: 26.6612% ( 2972) 00:14:38.198 2.098 - 2.110: 30.1251% ( 465) 00:14:38.198 2.110 - 2.121: 44.6067% ( 1944) 00:14:38.198 2.121 - 2.133: 58.2092% ( 1826) 00:14:38.198 2.133 - 2.145: 59.7661% ( 209) 00:14:38.198 2.145 - 2.157: 64.9136% ( 691) 00:14:38.198 2.157 - 2.169: 70.5751% ( 760) 00:14:38.198 2.169 - 2.181: 71.4318% ( 115) 00:14:38.198 2.181 - 2.193: 76.0653% ( 622) 00:14:38.198 2.193 - 2.204: 79.6484% ( 481) 00:14:38.198 2.204 - 2.216: 80.3337% ( 92) 00:14:38.198 2.216 - 2.228: 82.7995% ( 331) 00:14:38.198 2.228 - 2.240: 86.2187% ( 459) 00:14:38.198 2.240 - 2.252: 87.2318% ( 136) 00:14:38.198 2.252 - 2.264: 90.2563% ( 406) 00:14:38.198 2.264 - 2.276: 92.8710% ( 351) 00:14:38.198 2.276 - 2.287: 93.3254% ( 61) 00:14:38.198 2.287 - 2.299: 93.9884% ( 89) 00:14:38.198 2.299 - 2.311: 94.4651% ( 64) 00:14:38.198 2.311 - 2.323: 94.6514% ( 25) 00:14:38.198 2.323 - 2.335: 95.0089% ( 48) 00:14:38.198 2.335 - 2.347: 95.2846% ( 37) 00:14:38.198 2.347 - 2.359: 95.4336% ( 20) 00:14:38.198 2.359 - 2.370: 95.5825% ( 20) 00:14:38.198 2.370 - 2.382: 95.7762% ( 26) 00:14:38.198 2.382 - 2.394: 96.0816% ( 41) 00:14:38.198 2.394 - 2.406: 96.3424% ( 35) 00:14:38.198 2.406 - 2.418: 96.6701% ( 44) 00:14:38.198 2.418 - 2.430: 96.9383% ( 36) 00:14:38.198 2.430 - 2.441: 97.1916% ( 34) 00:14:38.198 2.441 - 2.453: 97.4002% ( 28) 00:14:38.198 2.453 - 2.465: 97.5790% ( 24) 00:14:38.198 2.465 - 2.477: 97.7279% ( 20) 00:14:38.198 2.477 - 2.489: 97.8099% ( 11) 00:14:38.198 2.489 - 2.501: 97.9291% ( 16) 00:14:38.198 2.501 - 2.513: 98.0036% ( 10) 00:14:38.198 2.513 - 2.524: 98.1004% ( 13) 00:14:38.198 2.524 - 2.536: 98.1526% ( 7) 00:14:38.198 2.536 - 2.548: 98.1824% ( 4) 00:14:38.198 2.548 - 2.560: 98.2196% ( 5) 00:14:38.198 2.560 - 2.572: 98.2420% ( 3) 00:14:38.198 2.596 - 2.607: 98.2718% ( 4) 00:14:38.198 2.607 - 2.619: 98.2792% ( 1) 00:14:38.198 2.631 - 2.643: 98.2867% ( 1) 00:14:38.198 2.643 - 2.655: 98.3090% ( 3) 00:14:38.198 2.655 - 2.667: 98.3164% ( 1) 00:14:38.198 2.667 - 2.679: 98.3239% ( 1) 00:14:38.198 2.679 - 2.690: 98.3313% ( 1) 00:14:38.198 2.750 - 2.761: 98.3388% ( 1) 00:14:38.198 2.773 - 2.785: 98.3462% ( 1) 00:14:38.198 2.785 - 2.797: 98.3537% ( 1) 00:14:38.198 2.797 - 2.809: 98.3686% ( 2) 00:14:38.198 2.844 - 2.856: 98.3760% ( 1) 00:14:38.198 2.927 - 2.939: 98.3835% ( 1) 00:14:38.198 2.963 - 2.975: 98.3909% ( 1) 00:14:38.198 2.975 - 2.987: 98.3984% ( 1) 00:14:38.198 2.987 - 2.999: 98.4058% ( 1) 00:14:38.198 3.153 - 3.176: 98.4133% ( 1) 00:14:38.198 3.224 - 3.247: 98.4207% ( 1) 00:14:38.198 3.247 - 3.271: 98.4282% ( 1) 00:14:38.198 3.271 - 3.295: 98.4356% ( 1) 00:14:38.198 3.295 - 3.319: 98.4580% ( 3) 00:14:38.198 3.342 - 3.366: 98.4803% ( 3) 00:14:38.198 3.366 - 3.390: 98.4952% ( 2) 00:14:38.198 3.390 - 3.413: 98.5101% ( 2) 00:14:38.198 3.461 - 3.484: 98.5176% ( 1) 00:14:38.198 3.484 - 3.508: 98.5250% ( 1) 00:14:38.198 3.508 - 3.532: 98.5399% ( 2) 00:14:38.198 3.532 - 3.556: 98.5474% ( 1) 00:14:38.198 3.556 - 3.579: 98.5623% ( 2) 00:14:38.198 3.627 - 3.650: 98.5772% ( 2) 00:14:38.198 3.650 - 3.674: 98.5846% ( 1) 00:14:38.198 3.721 - 3.745: 98.5921% ( 1) 00:14:38.198 3.840 - 3.864: 98.5995% ( 1) 00:14:38.198 3.911 - 3.935: 98.6070% ( 1) 00:14:38.198 4.053 - 4.077: 98.6144% ( 1) 00:14:38.198 4.124 - 4.148: 98.6219% ( 1) 00:14:38.198 5.499 - 5.523: 98.6293% ( 1) 00:14:38.198 5.594 - 5.618: 98.6368% ( 1) 00:14:38.198 5.641 - 5.665: 98.6442% ( 1) 00:14:38.198 5.902 - 5.926: 98.6517% ( 1) 00:14:38.198 5.950 - 
5.973: 98.6591% ( 1) 00:14:38.198 6.068 - 6.116: 98.6740% ( 2) 00:14:38.198 6.258 - 6.305: 98.6889% ( 2) 00:14:38.198 6.305 - 6.353: 98.7038% ( 2) 00:14:38.198 6.542 - 6.590: 98.7187% ( 2) 00:14:38.198 6.590 - 6.637: 98.7262% ( 1) 00:14:38.198 6.732 - 6.779: 98.7411% ( 2) 00:14:38.198 6.921 - 6.969: 98.7560% ( 2) 00:14:38.198 6.969 - 7.016: 98.7634% ( 1) 00:14:38.198 7.064 - 7.111: 98.7709% ( 1) 00:14:38.198 7.159 - 7.206: 98.7783% ( 1) 00:14:38.198 7.253 - 7.301: 98.7858% ( 1) 00:14:38.198 7.396 - 7.443: 98.8007% ( 2) 00:14:38.198 [2024-05-15 01:43:01.982877] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:38.198 7.490 - 7.538: 98.8081% ( 1) 00:14:38.198 7.585 - 7.633: 98.8156% ( 1) 00:14:38.198 7.964 - 8.012: 98.8230% ( 1) 00:14:38.198 11.236 - 11.283: 98.8305% ( 1) 00:14:38.198 11.757 - 11.804: 98.8379% ( 1) 00:14:38.198 13.369 - 13.464: 98.8454% ( 1) 00:14:38.198 15.455 - 15.550: 98.8528% ( 1) 00:14:38.198 15.550 - 15.644: 98.8603% ( 1) 00:14:38.198 15.644 - 15.739: 98.8677% ( 1) 00:14:38.198 15.739 - 15.834: 98.8826% ( 2) 00:14:38.198 15.834 - 15.929: 98.9049% ( 3) 00:14:38.198 15.929 - 16.024: 98.9273% ( 3) 00:14:38.198 16.024 - 16.119: 98.9869% ( 8) 00:14:38.198 16.119 - 16.213: 99.0018% ( 2) 00:14:38.198 16.213 - 16.308: 99.0539% ( 7) 00:14:38.198 16.308 - 16.403: 99.0912% ( 5) 00:14:38.198 16.403 - 16.498: 99.1210% ( 4) 00:14:38.198 16.498 - 16.593: 99.1508% ( 4) 00:14:38.198 16.593 - 16.687: 99.1806% ( 4) 00:14:38.198 16.687 - 16.782: 99.2029% ( 3) 00:14:38.198 16.782 - 16.877: 99.2551% ( 7) 00:14:38.198 16.877 - 16.972: 99.2774% ( 3) 00:14:38.198 16.972 - 17.067: 99.2849% ( 1) 00:14:38.198 17.067 - 17.161: 99.2923% ( 1) 00:14:38.198 17.161 - 17.256: 99.2998% ( 1) 00:14:38.198 17.256 - 17.351: 99.3147% ( 2) 00:14:38.198 17.351 - 17.446: 99.3296% ( 2) 00:14:38.198 17.446 - 17.541: 99.3370% ( 1) 00:14:38.198 17.541 - 17.636: 99.3519% ( 2) 00:14:38.198 17.636 - 17.730: 99.3594% ( 1) 00:14:38.198 17.730 - 17.825: 99.3668% ( 1) 00:14:38.198 17.825 - 17.920: 99.3743% ( 1) 00:14:38.198 18.204 - 18.299: 99.3817% ( 1) 00:14:38.198 19.153 - 19.247: 99.3892% ( 1) 00:14:38.198 3009.801 - 3021.938: 99.3966% ( 1) 00:14:38.198 3070.483 - 3082.619: 99.4041% ( 1) 00:14:38.198 3980.705 - 4004.978: 99.8808% ( 64) 00:14:38.198 4004.978 - 4029.250: 99.9926% ( 15) 00:14:38.198 5000.154 - 5024.427: 100.0000% ( 1) 00:14:38.198 00:14:38.198 01:43:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:38.199 01:43:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:38.199 01:43:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:38.199 01:43:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:38.199 01:43:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:38.457 [ 00:14:38.457 { 00:14:38.457 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:38.457 "subtype": "Discovery", 00:14:38.457 "listen_addresses": [], 00:14:38.457 "allow_any_host": true, 00:14:38.457 "hosts": [] 00:14:38.457 }, 00:14:38.457 { 00:14:38.457 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:38.457 "subtype": "NVMe", 00:14:38.457 "listen_addresses": [ 00:14:38.457 { 00:14:38.457 "trtype": "VFIOUSER",
"adrfam": "IPv4", 00:14:38.457 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:38.457 "trsvcid": "0" 00:14:38.457 } 00:14:38.457 ], 00:14:38.457 "allow_any_host": true, 00:14:38.457 "hosts": [], 00:14:38.457 "serial_number": "SPDK1", 00:14:38.457 "model_number": "SPDK bdev Controller", 00:14:38.457 "max_namespaces": 32, 00:14:38.457 "min_cntlid": 1, 00:14:38.457 "max_cntlid": 65519, 00:14:38.457 "namespaces": [ 00:14:38.457 { 00:14:38.457 "nsid": 1, 00:14:38.457 "bdev_name": "Malloc1", 00:14:38.457 "name": "Malloc1", 00:14:38.457 "nguid": "E051EAEA063B491D87A058CDD827EE97", 00:14:38.457 "uuid": "e051eaea-063b-491d-87a0-58cdd827ee97" 00:14:38.457 } 00:14:38.457 ] 00:14:38.457 }, 00:14:38.457 { 00:14:38.457 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:38.457 "subtype": "NVMe", 00:14:38.457 "listen_addresses": [ 00:14:38.457 { 00:14:38.457 "trtype": "VFIOUSER", 00:14:38.457 "adrfam": "IPv4", 00:14:38.457 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:38.457 "trsvcid": "0" 00:14:38.457 } 00:14:38.457 ], 00:14:38.457 "allow_any_host": true, 00:14:38.457 "hosts": [], 00:14:38.457 "serial_number": "SPDK2", 00:14:38.457 "model_number": "SPDK bdev Controller", 00:14:38.457 "max_namespaces": 32, 00:14:38.457 "min_cntlid": 1, 00:14:38.457 "max_cntlid": 65519, 00:14:38.457 "namespaces": [ 00:14:38.457 { 00:14:38.457 "nsid": 1, 00:14:38.457 "bdev_name": "Malloc2", 00:14:38.457 "name": "Malloc2", 00:14:38.457 "nguid": "667B11C34B70452EB00413EAED5941D9", 00:14:38.457 "uuid": "667b11c3-4b70-452e-b004-13eaed5941d9" 00:14:38.457 } 00:14:38.457 ] 00:14:38.457 } 00:14:38.457 ] 00:14:38.457 01:43:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:38.457 01:43:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=4012488 00:14:38.457 01:43:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:38.457 01:43:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:38.457 01:43:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # local i=0 00:14:38.457 01:43:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:38.457 01:43:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:38.457 01:43:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # return 0 00:14:38.457 01:43:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:38.457 01:43:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:38.457 EAL: No free 2048 kB hugepages reported on node 1 00:14:38.715 [2024-05-15 01:43:02.441710] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:38.715 Malloc3 00:14:38.715 01:43:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:38.972 [2024-05-15 01:43:02.796130] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:38.972 01:43:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:38.972 Asynchronous Event Request test 00:14:38.972 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:38.972 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:38.972 Registering asynchronous event callbacks... 00:14:38.972 Starting namespace attribute notice tests for all controllers... 00:14:38.972 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:38.972 aer_cb - Changed Namespace 00:14:38.972 Cleaning up... 00:14:39.230 [ 00:14:39.230 { 00:14:39.230 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:39.230 "subtype": "Discovery", 00:14:39.231 "listen_addresses": [], 00:14:39.231 "allow_any_host": true, 00:14:39.231 "hosts": [] 00:14:39.231 }, 00:14:39.231 { 00:14:39.231 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:39.231 "subtype": "NVMe", 00:14:39.231 "listen_addresses": [ 00:14:39.231 { 00:14:39.231 "trtype": "VFIOUSER", 00:14:39.231 "adrfam": "IPv4", 00:14:39.231 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:39.231 "trsvcid": "0" 00:14:39.231 } 00:14:39.231 ], 00:14:39.231 "allow_any_host": true, 00:14:39.231 "hosts": [], 00:14:39.231 "serial_number": "SPDK1", 00:14:39.231 "model_number": "SPDK bdev Controller", 00:14:39.231 "max_namespaces": 32, 00:14:39.231 "min_cntlid": 1, 00:14:39.231 "max_cntlid": 65519, 00:14:39.231 "namespaces": [ 00:14:39.231 { 00:14:39.231 "nsid": 1, 00:14:39.231 "bdev_name": "Malloc1", 00:14:39.231 "name": "Malloc1", 00:14:39.231 "nguid": "E051EAEA063B491D87A058CDD827EE97", 00:14:39.231 "uuid": "e051eaea-063b-491d-87a0-58cdd827ee97" 00:14:39.231 }, 00:14:39.231 { 00:14:39.231 "nsid": 2, 00:14:39.231 "bdev_name": "Malloc3", 00:14:39.231 "name": "Malloc3", 00:14:39.231 "nguid": "A023FBC89B11418793F023B8A397BC09", 00:14:39.231 "uuid": "a023fbc8-9b11-4187-93f0-23b8a397bc09" 00:14:39.231 } 00:14:39.231 ] 00:14:39.231 }, 00:14:39.231 { 00:14:39.231 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:39.231 "subtype": "NVMe", 00:14:39.231 "listen_addresses": [ 00:14:39.231 { 00:14:39.231 "trtype": "VFIOUSER", 00:14:39.231 "adrfam": "IPv4", 00:14:39.231 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:39.231 "trsvcid": "0" 00:14:39.231 } 00:14:39.231 ], 00:14:39.231 "allow_any_host": true, 00:14:39.231 "hosts": [], 00:14:39.231 "serial_number": "SPDK2", 00:14:39.231 "model_number": "SPDK bdev Controller", 00:14:39.231 
"max_namespaces": 32, 00:14:39.231 "min_cntlid": 1, 00:14:39.231 "max_cntlid": 65519, 00:14:39.231 "namespaces": [ 00:14:39.231 { 00:14:39.231 "nsid": 1, 00:14:39.231 "bdev_name": "Malloc2", 00:14:39.231 "name": "Malloc2", 00:14:39.231 "nguid": "667B11C34B70452EB00413EAED5941D9", 00:14:39.231 "uuid": "667b11c3-4b70-452e-b004-13eaed5941d9" 00:14:39.231 } 00:14:39.231 ] 00:14:39.231 } 00:14:39.231 ] 00:14:39.231 01:43:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 4012488 00:14:39.231 01:43:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:39.231 01:43:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:39.231 01:43:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:39.231 01:43:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:39.231 [2024-05-15 01:43:03.108457] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:14:39.231 [2024-05-15 01:43:03.108494] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4012620 ] 00:14:39.231 EAL: No free 2048 kB hugepages reported on node 1 00:14:39.231 [2024-05-15 01:43:03.142849] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:39.231 [2024-05-15 01:43:03.150598] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:39.231 [2024-05-15 01:43:03.150629] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f0185541000 00:14:39.231 [2024-05-15 01:43:03.151601] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:39.231 [2024-05-15 01:43:03.152604] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:39.231 [2024-05-15 01:43:03.153609] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:39.231 [2024-05-15 01:43:03.154612] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:39.231 [2024-05-15 01:43:03.155617] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:39.231 [2024-05-15 01:43:03.156628] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:39.231 [2024-05-15 01:43:03.157630] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:39.231 [2024-05-15 01:43:03.158644] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:39.231 [2024-05-15 01:43:03.159651] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:39.231 [2024-05-15 01:43:03.159673] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f01842f7000 00:14:39.231 [2024-05-15 01:43:03.160834] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:39.490 [2024-05-15 01:43:03.177068] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:39.490 [2024-05-15 01:43:03.177107] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:14:39.490 [2024-05-15 01:43:03.182246] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:39.490 [2024-05-15 01:43:03.182305] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:39.490 [2024-05-15 01:43:03.182411] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:14:39.490 [2024-05-15 01:43:03.182439] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:14:39.490 [2024-05-15 01:43:03.182450] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:14:39.490 [2024-05-15 01:43:03.183240] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:39.490 [2024-05-15 01:43:03.183271] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:14:39.490 [2024-05-15 01:43:03.183284] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:14:39.490 [2024-05-15 01:43:03.184239] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:39.490 [2024-05-15 01:43:03.184265] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:14:39.490 [2024-05-15 01:43:03.184279] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:14:39.490 [2024-05-15 01:43:03.185245] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:39.490 [2024-05-15 01:43:03.185272] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:39.490 [2024-05-15 01:43:03.186259] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:39.490 [2024-05-15 01:43:03.186280] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:14:39.490 [2024-05-15 01:43:03.186289] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:14:39.490 [2024-05-15 01:43:03.186301] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:39.490 [2024-05-15 01:43:03.186411] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:14:39.490 [2024-05-15 01:43:03.186420] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:39.490 [2024-05-15 01:43:03.186429] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:39.490 [2024-05-15 01:43:03.187279] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:39.490 [2024-05-15 01:43:03.188270] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:39.490 [2024-05-15 01:43:03.189285] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:39.490 [2024-05-15 01:43:03.190273] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:39.490 [2024-05-15 01:43:03.190361] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:39.490 [2024-05-15 01:43:03.191301] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:39.490 [2024-05-15 01:43:03.191324] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:39.490 [2024-05-15 01:43:03.191338] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:14:39.490 [2024-05-15 01:43:03.191364] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:14:39.490 [2024-05-15 01:43:03.191379] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:14:39.490 [2024-05-15 01:43:03.191406] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:39.490 [2024-05-15 01:43:03.191416] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:39.490 [2024-05-15 01:43:03.191437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:39.490 [2024-05-15 01:43:03.200234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:39.490 [2024-05-15 01:43:03.200260] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:14:39.490 [2024-05-15 01:43:03.200270] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:14:39.490 [2024-05-15 01:43:03.200278] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:14:39.490 [2024-05-15 01:43:03.200286] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:39.490 [2024-05-15 01:43:03.200294] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:14:39.490 [2024-05-15 01:43:03.200302] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:14:39.490 [2024-05-15 01:43:03.200311] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:14:39.490 [2024-05-15 01:43:03.200331] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:14:39.490 [2024-05-15 01:43:03.200351] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:39.490 [2024-05-15 01:43:03.208228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:39.490 [2024-05-15 01:43:03.208271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:39.491 [2024-05-15 01:43:03.208286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:39.491 [2024-05-15 01:43:03.208299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:39.491 [2024-05-15 01:43:03.208311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:39.491 [2024-05-15 01:43:03.208320] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:14:39.491 [2024-05-15 01:43:03.208332] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:39.491 [2024-05-15 01:43:03.208345] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:39.491 [2024-05-15 01:43:03.216243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:39.491 [2024-05-15 01:43:03.216267] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:14:39.491 [2024-05-15 01:43:03.216283] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:39.491 [2024-05-15 01:43:03.216296] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:14:39.491 [2024-05-15 01:43:03.216307] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:14:39.491 [2024-05-15 01:43:03.216325] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:39.491 [2024-05-15 01:43:03.224229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:39.491 [2024-05-15 01:43:03.224296] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:14:39.491 [2024-05-15 01:43:03.224313] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:14:39.491 [2024-05-15 01:43:03.224328] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:39.491 [2024-05-15 01:43:03.224337] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:39.491 [2024-05-15 01:43:03.224347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:39.491 [2024-05-15 01:43:03.232229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:39.491 [2024-05-15 01:43:03.232262] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:14:39.491 [2024-05-15 01:43:03.232287] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:14:39.491 [2024-05-15 01:43:03.232302] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:14:39.491 [2024-05-15 01:43:03.232315] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:39.491 [2024-05-15 01:43:03.232323] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:39.491 [2024-05-15 01:43:03.232334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:39.491 [2024-05-15 01:43:03.240227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:39.491 [2024-05-15 01:43:03.240256] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:39.491 [2024-05-15 01:43:03.240273] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:39.491 [2024-05-15 01:43:03.240286] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:39.491 [2024-05-15 01:43:03.240294] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:39.491 [2024-05-15 01:43:03.240305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:39.491 [2024-05-15 01:43:03.248231] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:39.491 [2024-05-15 01:43:03.248260] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:39.491 [2024-05-15 01:43:03.248280] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:14:39.491 [2024-05-15 01:43:03.248296] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:14:39.491 [2024-05-15 01:43:03.248307] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:39.491 [2024-05-15 01:43:03.248316] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:14:39.491 [2024-05-15 01:43:03.248326] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:14:39.491 [2024-05-15 01:43:03.248334] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:14:39.491 [2024-05-15 01:43:03.248343] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:14:39.491 [2024-05-15 01:43:03.248378] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:39.491 [2024-05-15 01:43:03.256226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:39.491 [2024-05-15 01:43:03.256253] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:39.491 [2024-05-15 01:43:03.264225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:39.491 [2024-05-15 01:43:03.264251] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:39.491 [2024-05-15 01:43:03.272227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:39.491 [2024-05-15 01:43:03.272252] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:39.491 [2024-05-15 01:43:03.280229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:39.491 [2024-05-15 01:43:03.280279] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:39.491 [2024-05-15 01:43:03.280290] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:39.491 [2024-05-15 01:43:03.280297] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:39.491 [2024-05-15 01:43:03.280303] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:39.491 [2024-05-15 01:43:03.280314] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:39.491 [2024-05-15 01:43:03.280326] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:39.491 [2024-05-15 01:43:03.280334] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:39.491 [2024-05-15 01:43:03.280344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:39.491 [2024-05-15 01:43:03.280355] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:39.491 [2024-05-15 01:43:03.280363] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:39.491 [2024-05-15 01:43:03.280372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:39.491 [2024-05-15 01:43:03.280393] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:39.491 [2024-05-15 01:43:03.280403] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:39.491 [2024-05-15 01:43:03.280413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:39.491 [2024-05-15 01:43:03.288232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:39.491 [2024-05-15 01:43:03.288274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:39.491 [2024-05-15 01:43:03.288291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:39.491 [2024-05-15 01:43:03.288307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:39.491 ===================================================== 00:14:39.491 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:39.491 ===================================================== 00:14:39.491 Controller Capabilities/Features 00:14:39.491 ================================ 00:14:39.491 Vendor ID: 4e58 00:14:39.491 Subsystem Vendor ID: 4e58 00:14:39.491 Serial Number: SPDK2 00:14:39.491 Model Number: SPDK bdev Controller 00:14:39.491 Firmware Version: 24.05 00:14:39.491 Recommended Arb Burst: 6 00:14:39.491 IEEE OUI Identifier: 8d 6b 50 00:14:39.491 Multi-path I/O 00:14:39.491 May have multiple subsystem ports: Yes 00:14:39.491 May have multiple controllers: Yes 00:14:39.491 Associated with SR-IOV VF: No 00:14:39.491 Max Data Transfer Size: 131072 00:14:39.491 Max Number of Namespaces: 32 00:14:39.491 Max Number of I/O Queues: 127 00:14:39.491 NVMe Specification Version (VS): 1.3 00:14:39.491 NVMe Specification Version (Identify): 1.3 00:14:39.491 Maximum Queue Entries: 256 00:14:39.491 Contiguous Queues Required: Yes 00:14:39.491 Arbitration Mechanisms Supported 00:14:39.491 Weighted Round Robin: Not Supported 00:14:39.491 Vendor Specific: Not Supported 00:14:39.491 Reset Timeout: 15000 ms 00:14:39.491 Doorbell Stride: 4 bytes 
00:14:39.491 NVM Subsystem Reset: Not Supported 00:14:39.491 Command Sets Supported 00:14:39.491 NVM Command Set: Supported 00:14:39.491 Boot Partition: Not Supported 00:14:39.491 Memory Page Size Minimum: 4096 bytes 00:14:39.491 Memory Page Size Maximum: 4096 bytes 00:14:39.492 Persistent Memory Region: Not Supported 00:14:39.492 Optional Asynchronous Events Supported 00:14:39.492 Namespace Attribute Notices: Supported 00:14:39.492 Firmware Activation Notices: Not Supported 00:14:39.492 ANA Change Notices: Not Supported 00:14:39.492 PLE Aggregate Log Change Notices: Not Supported 00:14:39.492 LBA Status Info Alert Notices: Not Supported 00:14:39.492 EGE Aggregate Log Change Notices: Not Supported 00:14:39.492 Normal NVM Subsystem Shutdown event: Not Supported 00:14:39.492 Zone Descriptor Change Notices: Not Supported 00:14:39.492 Discovery Log Change Notices: Not Supported 00:14:39.492 Controller Attributes 00:14:39.492 128-bit Host Identifier: Supported 00:14:39.492 Non-Operational Permissive Mode: Not Supported 00:14:39.492 NVM Sets: Not Supported 00:14:39.492 Read Recovery Levels: Not Supported 00:14:39.492 Endurance Groups: Not Supported 00:14:39.492 Predictable Latency Mode: Not Supported 00:14:39.492 Traffic Based Keep ALive: Not Supported 00:14:39.492 Namespace Granularity: Not Supported 00:14:39.492 SQ Associations: Not Supported 00:14:39.492 UUID List: Not Supported 00:14:39.492 Multi-Domain Subsystem: Not Supported 00:14:39.492 Fixed Capacity Management: Not Supported 00:14:39.492 Variable Capacity Management: Not Supported 00:14:39.492 Delete Endurance Group: Not Supported 00:14:39.492 Delete NVM Set: Not Supported 00:14:39.492 Extended LBA Formats Supported: Not Supported 00:14:39.492 Flexible Data Placement Supported: Not Supported 00:14:39.492 00:14:39.492 Controller Memory Buffer Support 00:14:39.492 ================================ 00:14:39.492 Supported: No 00:14:39.492 00:14:39.492 Persistent Memory Region Support 00:14:39.492 ================================ 00:14:39.492 Supported: No 00:14:39.492 00:14:39.492 Admin Command Set Attributes 00:14:39.492 ============================ 00:14:39.492 Security Send/Receive: Not Supported 00:14:39.492 Format NVM: Not Supported 00:14:39.492 Firmware Activate/Download: Not Supported 00:14:39.492 Namespace Management: Not Supported 00:14:39.492 Device Self-Test: Not Supported 00:14:39.492 Directives: Not Supported 00:14:39.492 NVMe-MI: Not Supported 00:14:39.492 Virtualization Management: Not Supported 00:14:39.492 Doorbell Buffer Config: Not Supported 00:14:39.492 Get LBA Status Capability: Not Supported 00:14:39.492 Command & Feature Lockdown Capability: Not Supported 00:14:39.492 Abort Command Limit: 4 00:14:39.492 Async Event Request Limit: 4 00:14:39.492 Number of Firmware Slots: N/A 00:14:39.492 Firmware Slot 1 Read-Only: N/A 00:14:39.492 Firmware Activation Without Reset: N/A 00:14:39.492 Multiple Update Detection Support: N/A 00:14:39.492 Firmware Update Granularity: No Information Provided 00:14:39.492 Per-Namespace SMART Log: No 00:14:39.492 Asymmetric Namespace Access Log Page: Not Supported 00:14:39.492 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:39.492 Command Effects Log Page: Supported 00:14:39.492 Get Log Page Extended Data: Supported 00:14:39.492 Telemetry Log Pages: Not Supported 00:14:39.492 Persistent Event Log Pages: Not Supported 00:14:39.492 Supported Log Pages Log Page: May Support 00:14:39.492 Commands Supported & Effects Log Page: Not Supported 00:14:39.492 Feature Identifiers & Effects Log Page:May 
Support 00:14:39.492 NVMe-MI Commands & Effects Log Page: May Support 00:14:39.492 Data Area 4 for Telemetry Log: Not Supported 00:14:39.492 Error Log Page Entries Supported: 128 00:14:39.492 Keep Alive: Supported 00:14:39.492 Keep Alive Granularity: 10000 ms 00:14:39.492 00:14:39.492 NVM Command Set Attributes 00:14:39.492 ========================== 00:14:39.492 Submission Queue Entry Size 00:14:39.492 Max: 64 00:14:39.492 Min: 64 00:14:39.492 Completion Queue Entry Size 00:14:39.492 Max: 16 00:14:39.492 Min: 16 00:14:39.492 Number of Namespaces: 32 00:14:39.492 Compare Command: Supported 00:14:39.492 Write Uncorrectable Command: Not Supported 00:14:39.492 Dataset Management Command: Supported 00:14:39.492 Write Zeroes Command: Supported 00:14:39.492 Set Features Save Field: Not Supported 00:14:39.492 Reservations: Not Supported 00:14:39.492 Timestamp: Not Supported 00:14:39.492 Copy: Supported 00:14:39.492 Volatile Write Cache: Present 00:14:39.492 Atomic Write Unit (Normal): 1 00:14:39.492 Atomic Write Unit (PFail): 1 00:14:39.492 Atomic Compare & Write Unit: 1 00:14:39.492 Fused Compare & Write: Supported 00:14:39.492 Scatter-Gather List 00:14:39.492 SGL Command Set: Supported (Dword aligned) 00:14:39.492 SGL Keyed: Not Supported 00:14:39.492 SGL Bit Bucket Descriptor: Not Supported 00:14:39.492 SGL Metadata Pointer: Not Supported 00:14:39.492 Oversized SGL: Not Supported 00:14:39.492 SGL Metadata Address: Not Supported 00:14:39.492 SGL Offset: Not Supported 00:14:39.492 Transport SGL Data Block: Not Supported 00:14:39.492 Replay Protected Memory Block: Not Supported 00:14:39.492 00:14:39.492 Firmware Slot Information 00:14:39.492 ========================= 00:14:39.492 Active slot: 1 00:14:39.492 Slot 1 Firmware Revision: 24.05 00:14:39.492 00:14:39.492 00:14:39.492 Commands Supported and Effects 00:14:39.492 ============================== 00:14:39.492 Admin Commands 00:14:39.492 -------------- 00:14:39.492 Get Log Page (02h): Supported 00:14:39.492 Identify (06h): Supported 00:14:39.492 Abort (08h): Supported 00:14:39.492 Set Features (09h): Supported 00:14:39.492 Get Features (0Ah): Supported 00:14:39.492 Asynchronous Event Request (0Ch): Supported 00:14:39.492 Keep Alive (18h): Supported 00:14:39.492 I/O Commands 00:14:39.492 ------------ 00:14:39.492 Flush (00h): Supported LBA-Change 00:14:39.492 Write (01h): Supported LBA-Change 00:14:39.492 Read (02h): Supported 00:14:39.492 Compare (05h): Supported 00:14:39.492 Write Zeroes (08h): Supported LBA-Change 00:14:39.492 Dataset Management (09h): Supported LBA-Change 00:14:39.492 Copy (19h): Supported LBA-Change 00:14:39.492 Unknown (79h): Supported LBA-Change 00:14:39.492 Unknown (7Ah): Supported 00:14:39.492 00:14:39.492 Error Log 00:14:39.492 ========= 00:14:39.492 00:14:39.492 Arbitration 00:14:39.492 =========== 00:14:39.492 Arbitration Burst: 1 00:14:39.492 00:14:39.492 Power Management 00:14:39.492 ================ 00:14:39.492 Number of Power States: 1 00:14:39.492 Current Power State: Power State #0 00:14:39.492 Power State #0: 00:14:39.492 Max Power: 0.00 W 00:14:39.492 Non-Operational State: Operational 00:14:39.492 Entry Latency: Not Reported 00:14:39.492 Exit Latency: Not Reported 00:14:39.492 Relative Read Throughput: 0 00:14:39.492 Relative Read Latency: 0 00:14:39.492 Relative Write Throughput: 0 00:14:39.492 Relative Write Latency: 0 00:14:39.492 Idle Power: Not Reported 00:14:39.492 Active Power: Not Reported 00:14:39.492 Non-Operational Permissive Mode: Not Supported 00:14:39.492 00:14:39.492 Health Information 
00:14:39.492 ================== 00:14:39.492 Critical Warnings: 00:14:39.492 Available Spare Space: OK 00:14:39.492 Temperature: OK 00:14:39.492 Device Reliability: OK 00:14:39.492 Read Only: No 00:14:39.492 Volatile Memory Backup: OK 00:14:39.492 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:39.492 [2024-05-15 01:43:03.288434] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:39.492 [2024-05-15 01:43:03.296228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:39.492 [2024-05-15 01:43:03.296277] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:14:39.492 [2024-05-15 01:43:03.296295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:39.492 [2024-05-15 01:43:03.296306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:39.492 [2024-05-15 01:43:03.296316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:39.492 [2024-05-15 01:43:03.296327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:39.492 [2024-05-15 01:43:03.296408] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:39.492 [2024-05-15 01:43:03.296432] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:39.492 [2024-05-15 01:43:03.297414] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:39.492 [2024-05-15 01:43:03.297498] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:14:39.492 [2024-05-15 01:43:03.297519] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:14:39.492 [2024-05-15 01:43:03.298425] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:39.492 [2024-05-15 01:43:03.298451] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:14:39.492 [2024-05-15 01:43:03.298562] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:39.492 [2024-05-15 01:43:03.299745] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:39.493 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:39.493 Available Spare: 0% 00:14:39.493 Available Spare Threshold: 0% 00:14:39.493 Life Percentage Used: 0% 00:14:39.493 Data Units Read: 0 00:14:39.493 Data Units Written: 0 00:14:39.493 Host Read Commands: 0 00:14:39.493 Host Write Commands: 0 00:14:39.493 Controller Busy Time: 0 minutes 00:14:39.493 Power Cycles: 0 00:14:39.493 Power On Hours: 0 hours 00:14:39.493 Unsafe Shutdowns: 0 00:14:39.493 Unrecoverable Media Errors: 0 00:14:39.493 Lifetime Error Log Entries: 0 00:14:39.493 Warning Temperature Time: 0
minutes 00:14:39.493 Critical Temperature Time: 0 minutes 00:14:39.493 00:14:39.493 Number of Queues 00:14:39.493 ================ 00:14:39.493 Number of I/O Submission Queues: 127 00:14:39.493 Number of I/O Completion Queues: 127 00:14:39.493 00:14:39.493 Active Namespaces 00:14:39.493 ================= 00:14:39.493 Namespace ID:1 00:14:39.493 Error Recovery Timeout: Unlimited 00:14:39.493 Command Set Identifier: NVM (00h) 00:14:39.493 Deallocate: Supported 00:14:39.493 Deallocated/Unwritten Error: Not Supported 00:14:39.493 Deallocated Read Value: Unknown 00:14:39.493 Deallocate in Write Zeroes: Not Supported 00:14:39.493 Deallocated Guard Field: 0xFFFF 00:14:39.493 Flush: Supported 00:14:39.493 Reservation: Supported 00:14:39.493 Namespace Sharing Capabilities: Multiple Controllers 00:14:39.493 Size (in LBAs): 131072 (0GiB) 00:14:39.493 Capacity (in LBAs): 131072 (0GiB) 00:14:39.493 Utilization (in LBAs): 131072 (0GiB) 00:14:39.493 NGUID: 667B11C34B70452EB00413EAED5941D9 00:14:39.493 UUID: 667b11c3-4b70-452e-b004-13eaed5941d9 00:14:39.493 Thin Provisioning: Not Supported 00:14:39.493 Per-NS Atomic Units: Yes 00:14:39.493 Atomic Boundary Size (Normal): 0 00:14:39.493 Atomic Boundary Size (PFail): 0 00:14:39.493 Atomic Boundary Offset: 0 00:14:39.493 Maximum Single Source Range Length: 65535 00:14:39.493 Maximum Copy Length: 65535 00:14:39.493 Maximum Source Range Count: 1 00:14:39.493 NGUID/EUI64 Never Reused: No 00:14:39.493 Namespace Write Protected: No 00:14:39.493 Number of LBA Formats: 1 00:14:39.493 Current LBA Format: LBA Format #00 00:14:39.493 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:39.493 00:14:39.493 01:43:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:39.493 EAL: No free 2048 kB hugepages reported on node 1 00:14:39.750 [2024-05-15 01:43:03.529107] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:45.009 Initializing NVMe Controllers 00:14:45.009 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:45.009 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:45.009 Initialization complete. Launching workers. 
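A note on reading the perf run above: in the spdk_nvme_perf invocation, -q 128 is the queue depth, -o 4096 the I/O size in bytes, -w the workload type, -t 5 the run time in seconds, and -c 0x2 the core mask, which is why the namespace is associated with lcore 1; -g appears as --single-file-segments in the DPDK EAL parameter lines elsewhere in this log. The MiB/s column of the result table that follows is simply IOPS times the I/O size. A minimal shell sanity check, plugging in the read run's row (the numbers are taken from the table below; the check itself is only illustrative):

  # 34510.31 IOPS at 4096 bytes per I/O, converted to MiB/s
  awk 'BEGIN { printf "%.2f MiB/s\n", 34510.31 * 4096 / (1024 * 1024) }'
  # prints 134.81 MiB/s, matching the MiB/s column in the table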
00:14:45.009 ======================================================== 00:14:45.009 Latency(us) 00:14:45.009 Device Information : IOPS MiB/s Average min max 00:14:45.009 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34510.31 134.81 3708.20 1173.86 8690.88 00:14:45.009 ======================================================== 00:14:45.009 Total : 34510.31 134.81 3708.20 1173.86 8690.88 00:14:45.009 00:14:45.009 [2024-05-15 01:43:08.635604] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:45.009 01:43:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:45.009 EAL: No free 2048 kB hugepages reported on node 1 00:14:45.009 [2024-05-15 01:43:08.866261] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:50.270 Initializing NVMe Controllers 00:14:50.270 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:50.270 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:50.270 Initialization complete. Launching workers. 00:14:50.270 ======================================================== 00:14:50.270 Latency(us) 00:14:50.270 Device Information : IOPS MiB/s Average min max 00:14:50.270 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31717.36 123.90 4035.39 1195.40 9857.89 00:14:50.270 ======================================================== 00:14:50.270 Total : 31717.36 123.90 4035.39 1195.40 9857.89 00:14:50.270 00:14:50.270 [2024-05-15 01:43:13.891593] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:50.270 01:43:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:50.270 EAL: No free 2048 kB hugepages reported on node 1 00:14:50.270 [2024-05-15 01:43:14.115455] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:55.528 [2024-05-15 01:43:19.245378] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:55.528 Initializing NVMe Controllers 00:14:55.528 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:55.528 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:55.528 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:55.528 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:55.528 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:55.528 Initialization complete. Launching workers. 
00:14:55.528 Starting thread on core 2 00:14:55.528 Starting thread on core 3 00:14:55.528 Starting thread on core 1 00:14:55.528 01:43:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:55.528 EAL: No free 2048 kB hugepages reported on node 1 00:14:55.784 [2024-05-15 01:43:19.564870] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:59.111 [2024-05-15 01:43:22.636274] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:59.111 Initializing NVMe Controllers 00:14:59.111 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:59.111 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:59.111 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:59.111 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:59.111 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:59.111 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:59.111 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:59.111 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:59.111 Initialization complete. Launching workers. 00:14:59.111 Starting thread on core 1 with urgent priority queue 00:14:59.111 Starting thread on core 2 with urgent priority queue 00:14:59.111 Starting thread on core 3 with urgent priority queue 00:14:59.111 Starting thread on core 0 with urgent priority queue 00:14:59.111 SPDK bdev Controller (SPDK2 ) core 0: 5650.33 IO/s 17.70 secs/100000 ios 00:14:59.111 SPDK bdev Controller (SPDK2 ) core 1: 5462.33 IO/s 18.31 secs/100000 ios 00:14:59.111 SPDK bdev Controller (SPDK2 ) core 2: 5729.33 IO/s 17.45 secs/100000 ios 00:14:59.111 SPDK bdev Controller (SPDK2 ) core 3: 5285.67 IO/s 18.92 secs/100000 ios 00:14:59.111 ======================================================== 00:14:59.111 00:14:59.111 01:43:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:59.111 EAL: No free 2048 kB hugepages reported on node 1 00:14:59.111 [2024-05-15 01:43:22.938718] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:59.111 Initializing NVMe Controllers 00:14:59.111 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:59.111 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:59.111 Namespace ID: 1 size: 0GB 00:14:59.111 Initialization complete. 00:14:59.111 INFO: using host memory buffer for IO 00:14:59.111 Hello world! 
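The example tools exercised in this stretch of the log (spdk_nvme_identify, spdk_nvme_perf, reconnect, arbitration, hello_world) all attach to vfio-user endpoints that the test script created earlier through rpc.py. For reference, a minimal sketch of the RPC sequence behind an endpoint like /var/run/vfio-user/domain/vfio-user2/2 with the SPDK2 subsystem seen in the nvmf_get_subsystems output above; the names, sizes and paths are taken from this log, it assumes a running nvmf_tgt, and exact flag spellings may differ between SPDK releases:

  # sketch of a vfio-user target setup (flags per an SPDK 24.05-era rpc.py)
  rpc.py nvmf_create_transport -t VFIOUSER
  rpc.py bdev_malloc_create 64 512 -b Malloc2                                # 64 MiB malloc bdev, 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 -m 32  # allow any host, serial SPDK2, max 32 namespaces
  rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2
  rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0

The AER test earlier follows the same pattern: adding Malloc3 as a second namespace on cnode1 (nvmf_subsystem_add_ns ... Malloc3 -n 2) is what generates the namespace-attribute-changed event that the aer tool's callback reports as "aer_cb - Changed Namespace".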
00:14:59.111 [2024-05-15 01:43:22.947788] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:59.111 01:43:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:59.368 EAL: No free 2048 kB hugepages reported on node 1 00:14:59.368 [2024-05-15 01:43:23.258051] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:00.741 Initializing NVMe Controllers 00:15:00.741 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:00.741 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:00.741 Initialization complete. Launching workers. 00:15:00.741 submit (in ns) avg, min, max = 5987.6, 3520.0, 4017076.7 00:15:00.741 complete (in ns) avg, min, max = 28061.4, 2094.4, 4017951.1 00:15:00.741 00:15:00.741 Submit histogram 00:15:00.742 ================ 00:15:00.742 Range in us Cumulative Count 00:15:00.742 3.508 - 3.532: 0.2075% ( 28) 00:15:00.742 3.532 - 3.556: 1.6455% ( 194) 00:15:00.742 3.556 - 3.579: 5.1664% ( 475) 00:15:00.742 3.579 - 3.603: 10.9258% ( 777) 00:15:00.742 3.603 - 3.627: 18.4197% ( 1011) 00:15:00.742 3.627 - 3.650: 26.4473% ( 1083) 00:15:00.742 3.650 - 3.674: 34.3192% ( 1062) 00:15:00.742 3.674 - 3.698: 41.6648% ( 991) 00:15:00.742 3.698 - 3.721: 48.9141% ( 978) 00:15:00.742 3.721 - 3.745: 55.0886% ( 833) 00:15:00.742 3.745 - 3.769: 60.0326% ( 667) 00:15:00.742 3.769 - 3.793: 64.2873% ( 574) 00:15:00.742 3.793 - 3.816: 67.5784% ( 444) 00:15:00.742 3.816 - 3.840: 71.3142% ( 504) 00:15:00.742 3.840 - 3.864: 74.9981% ( 497) 00:15:00.742 3.864 - 3.887: 78.7859% ( 511) 00:15:00.742 3.887 - 3.911: 81.9806% ( 431) 00:15:00.742 3.911 - 3.935: 84.8343% ( 385) 00:15:00.742 3.935 - 3.959: 87.1396% ( 311) 00:15:00.742 3.959 - 3.982: 89.2002% ( 278) 00:15:00.742 3.982 - 4.006: 90.5641% ( 184) 00:15:00.742 4.006 - 4.030: 91.8612% ( 175) 00:15:00.742 4.030 - 4.053: 92.9138% ( 142) 00:15:00.742 4.053 - 4.077: 93.7440% ( 112) 00:15:00.742 4.077 - 4.101: 94.5667% ( 111) 00:15:00.742 4.101 - 4.124: 95.0708% ( 68) 00:15:00.742 4.124 - 4.148: 95.5600% ( 66) 00:15:00.742 4.148 - 4.172: 96.0418% ( 65) 00:15:00.742 4.172 - 4.196: 96.3383% ( 40) 00:15:00.742 4.196 - 4.219: 96.5977% ( 35) 00:15:00.742 4.219 - 4.243: 96.8794% ( 38) 00:15:00.742 4.243 - 4.267: 97.0202% ( 19) 00:15:00.742 4.267 - 4.290: 97.1240% ( 14) 00:15:00.742 4.290 - 4.314: 97.2500% ( 17) 00:15:00.742 4.314 - 4.338: 97.3538% ( 14) 00:15:00.742 4.338 - 4.361: 97.4279% ( 10) 00:15:00.742 4.361 - 4.385: 97.5169% ( 12) 00:15:00.742 4.385 - 4.409: 97.5836% ( 9) 00:15:00.742 4.409 - 4.433: 97.6206% ( 5) 00:15:00.742 4.433 - 4.456: 97.6873% ( 9) 00:15:00.742 4.456 - 4.480: 97.7244% ( 5) 00:15:00.742 4.480 - 4.504: 97.7318% ( 1) 00:15:00.742 4.527 - 4.551: 97.7689% ( 5) 00:15:00.742 4.551 - 4.575: 97.7911% ( 3) 00:15:00.742 4.575 - 4.599: 97.7985% ( 1) 00:15:00.742 4.599 - 4.622: 97.8134% ( 2) 00:15:00.742 4.646 - 4.670: 97.8208% ( 1) 00:15:00.742 4.670 - 4.693: 97.8282% ( 1) 00:15:00.742 4.693 - 4.717: 97.8504% ( 3) 00:15:00.742 4.764 - 4.788: 97.8652% ( 2) 00:15:00.742 4.788 - 4.812: 97.8727% ( 1) 00:15:00.742 4.812 - 4.836: 97.8801% ( 1) 00:15:00.742 4.836 - 4.859: 97.8875% ( 1) 00:15:00.742 4.859 - 4.883: 97.8949% ( 1) 00:15:00.742 4.907 - 4.930: 97.9320% ( 5) 00:15:00.742 4.930 - 4.954: 97.9394% ( 1) 00:15:00.742 
4.954 - 4.978: 97.9616% ( 3) 00:15:00.742 4.978 - 5.001: 97.9987% ( 5) 00:15:00.742 5.001 - 5.025: 98.0283% ( 4) 00:15:00.742 5.025 - 5.049: 98.0802% ( 7) 00:15:00.742 5.049 - 5.073: 98.1617% ( 11) 00:15:00.742 5.073 - 5.096: 98.2284% ( 9) 00:15:00.742 5.096 - 5.120: 98.2655% ( 5) 00:15:00.742 5.120 - 5.144: 98.3026% ( 5) 00:15:00.742 5.144 - 5.167: 98.3619% ( 8) 00:15:00.742 5.167 - 5.191: 98.4212% ( 8) 00:15:00.742 5.191 - 5.215: 98.4508% ( 4) 00:15:00.742 5.215 - 5.239: 98.4731% ( 3) 00:15:00.742 5.239 - 5.262: 98.5101% ( 5) 00:15:00.742 5.262 - 5.286: 98.5694% ( 8) 00:15:00.742 5.286 - 5.310: 98.5842% ( 2) 00:15:00.742 5.310 - 5.333: 98.6213% ( 5) 00:15:00.742 5.333 - 5.357: 98.6361% ( 2) 00:15:00.742 5.357 - 5.381: 98.6806% ( 6) 00:15:00.742 5.428 - 5.452: 98.6954% ( 2) 00:15:00.742 5.452 - 5.476: 98.7028% ( 1) 00:15:00.742 5.476 - 5.499: 98.7103% ( 1) 00:15:00.742 5.499 - 5.523: 98.7177% ( 1) 00:15:00.742 5.523 - 5.547: 98.7325% ( 2) 00:15:00.742 5.547 - 5.570: 98.7473% ( 2) 00:15:00.742 5.570 - 5.594: 98.7547% ( 1) 00:15:00.742 5.618 - 5.641: 98.7696% ( 2) 00:15:00.742 5.641 - 5.665: 98.7770% ( 1) 00:15:00.742 5.902 - 5.926: 98.7844% ( 1) 00:15:00.742 6.116 - 6.163: 98.7918% ( 1) 00:15:00.742 6.495 - 6.542: 98.7992% ( 1) 00:15:00.742 6.637 - 6.684: 98.8066% ( 1) 00:15:00.742 6.732 - 6.779: 98.8140% ( 1) 00:15:00.742 6.779 - 6.827: 98.8214% ( 1) 00:15:00.742 6.827 - 6.874: 98.8288% ( 1) 00:15:00.742 6.969 - 7.016: 98.8363% ( 1) 00:15:00.742 7.016 - 7.064: 98.8511% ( 2) 00:15:00.742 7.111 - 7.159: 98.8585% ( 1) 00:15:00.742 7.585 - 7.633: 98.8659% ( 1) 00:15:00.742 8.059 - 8.107: 98.8733% ( 1) 00:15:00.742 8.107 - 8.154: 98.8807% ( 1) 00:15:00.742 8.296 - 8.344: 98.8881% ( 1) 00:15:00.742 8.344 - 8.391: 98.8956% ( 1) 00:15:00.742 8.439 - 8.486: 98.9030% ( 1) 00:15:00.742 8.486 - 8.533: 98.9104% ( 1) 00:15:00.742 8.581 - 8.628: 98.9178% ( 1) 00:15:00.742 8.770 - 8.818: 98.9252% ( 1) 00:15:00.742 8.913 - 8.960: 98.9326% ( 1) 00:15:00.742 9.055 - 9.102: 98.9400% ( 1) 00:15:00.742 9.150 - 9.197: 98.9474% ( 1) 00:15:00.742 9.197 - 9.244: 98.9549% ( 1) 00:15:00.742 9.244 - 9.292: 98.9623% ( 1) 00:15:00.742 9.387 - 9.434: 98.9697% ( 1) 00:15:00.742 9.434 - 9.481: 98.9771% ( 1) 00:15:00.742 9.576 - 9.624: 98.9919% ( 2) 00:15:00.742 9.624 - 9.671: 98.9993% ( 1) 00:15:00.742 9.719 - 9.766: 99.0067% ( 1) 00:15:00.742 10.145 - 10.193: 99.0216% ( 2) 00:15:00.742 10.382 - 10.430: 99.0290% ( 1) 00:15:00.742 10.477 - 10.524: 99.0364% ( 1) 00:15:00.742 10.524 - 10.572: 99.0438% ( 1) 00:15:00.742 10.572 - 10.619: 99.0512% ( 1) 00:15:00.742 10.714 - 10.761: 99.0660% ( 2) 00:15:00.742 10.809 - 10.856: 99.0735% ( 1) 00:15:00.742 10.856 - 10.904: 99.0809% ( 1) 00:15:00.742 10.951 - 10.999: 99.0883% ( 1) 00:15:00.742 10.999 - 11.046: 99.0957% ( 1) 00:15:00.742 11.283 - 11.330: 99.1031% ( 1) 00:15:00.742 11.473 - 11.520: 99.1105% ( 1) 00:15:00.742 11.615 - 11.662: 99.1179% ( 1) 00:15:00.742 11.947 - 11.994: 99.1328% ( 2) 00:15:00.742 12.231 - 12.326: 99.1476% ( 2) 00:15:00.742 12.421 - 12.516: 99.1550% ( 1) 00:15:00.742 12.516 - 12.610: 99.1624% ( 1) 00:15:00.742 12.800 - 12.895: 99.1698% ( 1) 00:15:00.742 12.990 - 13.084: 99.1846% ( 2) 00:15:00.742 14.317 - 14.412: 99.1921% ( 1) 00:15:00.742 14.601 - 14.696: 99.1995% ( 1) 00:15:00.742 15.360 - 15.455: 99.2069% ( 1) 00:15:00.742 17.161 - 17.256: 99.2291% ( 3) 00:15:00.742 17.256 - 17.351: 99.2439% ( 2) 00:15:00.742 17.351 - 17.446: 99.2588% ( 2) 00:15:00.742 17.446 - 17.541: 99.2662% ( 1) 00:15:00.742 17.541 - 17.636: 99.2958% ( 4) 00:15:00.742 17.636 - 
17.730: 99.3625% ( 9) 00:15:00.742 17.730 - 17.825: 99.3774% ( 2) 00:15:00.742 17.825 - 17.920: 99.3996% ( 3) 00:15:00.742 17.920 - 18.015: 99.4737% ( 10) 00:15:00.742 18.015 - 18.110: 99.5330% ( 8) 00:15:00.742 18.110 - 18.204: 99.5849% ( 7) 00:15:00.742 18.204 - 18.299: 99.5997% ( 2) 00:15:00.742 18.299 - 18.394: 99.6442% ( 6) 00:15:00.742 18.394 - 18.489: 99.7332% ( 12) 00:15:00.742 18.489 - 18.584: 99.7776% ( 6) 00:15:00.742 18.584 - 18.679: 99.8073% ( 4) 00:15:00.742 18.679 - 18.773: 99.8221% ( 2) 00:15:00.743 18.773 - 18.868: 99.8592% ( 5) 00:15:00.743 18.868 - 18.963: 99.8740% ( 2) 00:15:00.743 18.963 - 19.058: 99.8814% ( 1) 00:15:00.743 19.247 - 19.342: 99.8888% ( 1) 00:15:00.743 19.342 - 19.437: 99.8962% ( 1) 00:15:00.743 19.627 - 19.721: 99.9036% ( 1) 00:15:00.743 19.816 - 19.911: 99.9111% ( 1) 00:15:00.743 20.385 - 20.480: 99.9185% ( 1) 00:15:00.743 22.376 - 22.471: 99.9259% ( 1) 00:15:00.743 22.945 - 23.040: 99.9333% ( 1) 00:15:00.743 24.273 - 24.462: 99.9407% ( 1) 00:15:00.743 24.462 - 24.652: 99.9481% ( 1) 00:15:00.743 3980.705 - 4004.978: 99.9704% ( 3) 00:15:00.743 4004.978 - 4029.250: 100.0000% ( 4) 00:15:00.743 00:15:00.743 Complete histogram 00:15:00.743 ================== 00:15:00.743 Range in us Cumulative Count 00:15:00.743 2.086 - 2.098: 0.1408% ( 19) 00:15:00.743 2.098 - 2.110: 14.3132% ( 1912) 00:15:00.743 2.110 - 2.121: 24.2977% ( 1347) 00:15:00.743 2.121 - 2.133: 29.0786% ( 645) 00:15:00.743 2.133 - 2.145: 52.6573% ( 3181) 00:15:00.743 2.145 - 2.157: 57.7422% ( 686) 00:15:00.743 2.157 - 2.169: 59.8399% ( 283) 00:15:00.743 2.169 - 2.181: 67.1411% ( 985) 00:15:00.743 2.181 - 2.193: 70.4396% ( 445) 00:15:00.743 2.193 - 2.204: 73.7677% ( 449) 00:15:00.743 2.204 - 2.216: 82.6625% ( 1200) 00:15:00.743 2.216 - 2.228: 85.3828% ( 367) 00:15:00.743 2.228 - 2.240: 86.2872% ( 122) 00:15:00.743 2.240 - 2.252: 88.8074% ( 340) 00:15:00.743 2.252 - 2.264: 90.0897% ( 173) 00:15:00.743 2.264 - 2.276: 90.9421% ( 115) 00:15:00.743 2.276 - 2.287: 93.3363% ( 323) 00:15:00.743 2.287 - 2.299: 94.3147% ( 132) 00:15:00.743 2.299 - 2.311: 94.6705% ( 48) 00:15:00.743 2.311 - 2.323: 95.1375% ( 63) 00:15:00.743 2.323 - 2.335: 95.3154% ( 24) 00:15:00.743 2.335 - 2.347: 95.4933% ( 24) 00:15:00.743 2.347 - 2.359: 95.6860% ( 26) 00:15:00.743 2.359 - 2.370: 95.8417% ( 21) 00:15:00.743 2.370 - 2.382: 95.9306% ( 12) 00:15:00.743 2.382 - 2.394: 96.1011% ( 23) 00:15:00.743 2.394 - 2.406: 96.2790% ( 24) 00:15:00.743 2.406 - 2.418: 96.5236% ( 33) 00:15:00.743 2.418 - 2.430: 96.8053% ( 38) 00:15:00.743 2.430 - 2.441: 97.1092% ( 41) 00:15:00.743 2.441 - 2.453: 97.4057% ( 40) 00:15:00.743 2.453 - 2.465: 97.6429% ( 32) 00:15:00.743 2.465 - 2.477: 97.7985% ( 21) 00:15:00.743 2.477 - 2.489: 97.9245% ( 17) 00:15:00.743 2.489 - 2.501: 98.0950% ( 23) 00:15:00.743 2.501 - 2.513: 98.1543% ( 8) 00:15:00.743 2.513 - 2.524: 98.2507% ( 13) 00:15:00.743 2.524 - 2.536: 98.3026% ( 7) 00:15:00.743 2.536 - 2.548: 98.3396% ( 5) 00:15:00.743 2.548 - 2.560: 98.3767% ( 5) 00:15:00.743 2.560 - 2.572: 98.3989% ( 3) 00:15:00.743 2.572 - 2.584: 98.4360% ( 5) 00:15:00.743 2.584 - 2.596: 98.4582% ( 3) 00:15:00.743 2.607 - 2.619: 98.4656% ( 1) 00:15:00.743 2.619 - 2.631: 98.4805% ( 2) 00:15:00.743 2.655 - 2.667: 98.4953% ( 2) 00:15:00.743 2.667 - 2.679: 98.5027% ( 1) 00:15:00.743 2.679 - 2.690: 98.5101% ( 1) 00:15:00.743 2.726 - 2.738: 98.5175% ( 1) 00:15:00.743 2.738 - 2.750: 98.5324% ( 2) 00:15:00.743 2.785 - 2.797: 98.5398% ( 1) 00:15:00.743 2.821 - 2.833: 98.5472% ( 1) 00:15:00.743 2.892 - 2.904: 98.5546% ( 1) 00:15:00.743 2.916 - 
2.927: 98.5620% ( 1) 00:15:00.743 3.319 - 3.342: 98.5694% ( 1) 00:15:00.743 3.366 - 3.390: 98.5842% ( 2) 00:15:00.743 3.437 - 3.461: 98.5917% ( 1) 00:15:00.743 3.484 - 3.508: 98.6065% ( 2) 00:15:00.743 3.508 - 3.532: 98.6213% ( 2) 00:15:00.743 3.579 - 3.603: 98.6287% ( 1) 00:15:00.743 3.603 - 3.627: 98.6361% ( 1) 00:15:00.743 3.627 - 3.650: 98.6510% ( 2) 00:15:00.743 3.650 - 3.674: 98.6732% ( 3) 00:15:00.743 3.721 - 3.745: 98.6806% ( 1) 00:15:00.743 3.793 - 3.816: 98.6880% ( 1) 00:15:00.743 3.816 - 3.840: 98.6954% ( 1) 00:15:00.743 3.840 - 3.864: 98.7028% ( 1) 00:15:00.743 3.887 - 3.911: 98.7103% ( 1) 00:15:00.743 3.982 - 4.006: 98.7177% ( 1) 00:15:00.743 4.053 - 4.077: 98.7251% ( 1) 00:15:00.743 4.267 - 4.290: 98.7325% ( 1) 00:15:00.743 5.333 - 5.357: 98.7399% ( 1) 00:15:00.743 5.499 - 5.523: 98.7473% ( 1) 00:15:00.743 5.665 - 5.689: 98.7547% ( 1) 00:15:00.743 6.044 - 6.068: 98.7621% ( 1) 00:15:00.743 6.116 - 6.163: 98.7696% ( 1) 00:15:00.743 6.542 - 6.590: 98.7770% ( 1) 00:15:00.743 6.637 - 6.684: 98.7844% ( 1) 00:15:00.743 6.684 - 6.732: 98.7918% ( 1) 00:15:00.743 7.016 - 7.064: 98.7992% ( 1) 00:15:00.743 7.159 - 7.206: 98.8288% ( 4) 00:15:00.743 7.206 - 7.253: 98.8363% ( 1) 00:15:00.743 7.396 - 7.443: 98.8437% ( 1) 00:15:00.743 8.201 - 8.249: 98.8511% ( 1) 00:15:00.743 8.296 - 8.344: 98.8585% ( 1) 00:15:00.743 9.624 - 9.671: 98.8659% ( 1) 00:15:00.743 9.671 - 9.719: 98.8733% ( 1) 00:15:00.743 [2024-05-15 01:43:24.352901] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:00.743 12.516 - 12.610: 98.8807% ( 1) 00:15:00.743 15.455 - 15.550: 98.8881% ( 1) 00:15:00.743 15.739 - 15.834: 98.8956% ( 1) 00:15:00.743 15.834 - 15.929: 98.9030% ( 1) 00:15:00.743 15.929 - 16.024: 98.9474% ( 6) 00:15:00.743 16.024 - 16.119: 98.9623% ( 2) 00:15:00.743 16.119 - 16.213: 98.9993% ( 5) 00:15:00.743 16.213 - 16.308: 99.0290% ( 4) 00:15:00.743 16.308 - 16.403: 99.0735% ( 6) 00:15:00.743 16.403 - 16.498: 99.0883% ( 2) 00:15:00.743 16.498 - 16.593: 99.0957% ( 1) 00:15:00.743 16.593 - 16.687: 99.1031% ( 1) 00:15:00.743 16.687 - 16.782: 99.1328% ( 4) 00:15:00.743 16.782 - 16.877: 99.1772% ( 6) 00:15:00.743 16.877 - 16.972: 99.1921% ( 2) 00:15:00.743 16.972 - 17.067: 99.2143% ( 3) 00:15:00.743 17.067 - 17.161: 99.2439% ( 4) 00:15:00.743 17.161 - 17.256: 99.2662% ( 3) 00:15:00.743 17.256 - 17.351: 99.2810% ( 2) 00:15:00.743 17.446 - 17.541: 99.2884% ( 1) 00:15:00.743 17.636 - 17.730: 99.3181% ( 4) 00:15:00.743 17.730 - 17.825: 99.3255% ( 1) 00:15:00.743 18.299 - 18.394: 99.3329% ( 1) 00:15:00.743 18.394 - 18.489: 99.3403% ( 1) 00:15:00.743 21.618 - 21.713: 99.3477% ( 1) 00:15:00.743 25.410 - 25.600: 99.3551% ( 1) 00:15:00.743 3980.705 - 4004.978: 99.8073% ( 61) 00:15:00.743 4004.978 - 4029.250: 100.0000% ( 26) 00:15:00.743 00 00:15:00.743 01:43:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:00.743 01:43:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:00.743 01:43:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:00.743 01:43:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:00.743 01:43:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:00.743 [ 00:15:00.743 { 00:15:00.743 "nqn":
"nqn.2014-08.org.nvmexpress.discovery", 00:15:00.743 "subtype": "Discovery", 00:15:00.743 "listen_addresses": [], 00:15:00.743 "allow_any_host": true, 00:15:00.743 "hosts": [] 00:15:00.743 }, 00:15:00.743 { 00:15:00.743 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:00.743 "subtype": "NVMe", 00:15:00.743 "listen_addresses": [ 00:15:00.743 { 00:15:00.743 "trtype": "VFIOUSER", 00:15:00.743 "adrfam": "IPv4", 00:15:00.743 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:00.743 "trsvcid": "0" 00:15:00.743 } 00:15:00.743 ], 00:15:00.743 "allow_any_host": true, 00:15:00.743 "hosts": [], 00:15:00.743 "serial_number": "SPDK1", 00:15:00.743 "model_number": "SPDK bdev Controller", 00:15:00.743 "max_namespaces": 32, 00:15:00.743 "min_cntlid": 1, 00:15:00.743 "max_cntlid": 65519, 00:15:00.743 "namespaces": [ 00:15:00.743 { 00:15:00.743 "nsid": 1, 00:15:00.743 "bdev_name": "Malloc1", 00:15:00.743 "name": "Malloc1", 00:15:00.743 "nguid": "E051EAEA063B491D87A058CDD827EE97", 00:15:00.743 "uuid": "e051eaea-063b-491d-87a0-58cdd827ee97" 00:15:00.743 }, 00:15:00.743 { 00:15:00.744 "nsid": 2, 00:15:00.744 "bdev_name": "Malloc3", 00:15:00.744 "name": "Malloc3", 00:15:00.744 "nguid": "A023FBC89B11418793F023B8A397BC09", 00:15:00.744 "uuid": "a023fbc8-9b11-4187-93f0-23b8a397bc09" 00:15:00.744 } 00:15:00.744 ] 00:15:00.744 }, 00:15:00.744 { 00:15:00.744 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:00.744 "subtype": "NVMe", 00:15:00.744 "listen_addresses": [ 00:15:00.744 { 00:15:00.744 "trtype": "VFIOUSER", 00:15:00.744 "adrfam": "IPv4", 00:15:00.744 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:00.744 "trsvcid": "0" 00:15:00.744 } 00:15:00.744 ], 00:15:00.744 "allow_any_host": true, 00:15:00.744 "hosts": [], 00:15:00.744 "serial_number": "SPDK2", 00:15:00.744 "model_number": "SPDK bdev Controller", 00:15:00.744 "max_namespaces": 32, 00:15:00.744 "min_cntlid": 1, 00:15:00.744 "max_cntlid": 65519, 00:15:00.744 "namespaces": [ 00:15:00.744 { 00:15:00.744 "nsid": 1, 00:15:00.744 "bdev_name": "Malloc2", 00:15:00.744 "name": "Malloc2", 00:15:00.744 "nguid": "667B11C34B70452EB00413EAED5941D9", 00:15:00.744 "uuid": "667b11c3-4b70-452e-b004-13eaed5941d9" 00:15:00.744 } 00:15:00.744 ] 00:15:00.744 } 00:15:00.744 ] 00:15:00.744 01:43:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:00.744 01:43:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=4015154 00:15:00.744 01:43:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:00.744 01:43:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:00.744 01:43:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # local i=0 00:15:00.744 01:43:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:00.744 01:43:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:00.744 01:43:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # return 0 00:15:00.744 01:43:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:00.744 01:43:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:01.001 EAL: No free 2048 kB hugepages reported on node 1 00:15:01.001 [2024-05-15 01:43:24.824760] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:01.001 Malloc4 00:15:01.001 01:43:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:01.259 [2024-05-15 01:43:25.153236] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:01.259 01:43:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:01.517 Asynchronous Event Request test 00:15:01.517 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:01.517 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:01.517 Registering asynchronous event callbacks... 00:15:01.517 Starting namespace attribute notice tests for all controllers... 00:15:01.517 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:01.517 aer_cb - Changed Namespace 00:15:01.517 Cleaning up... 00:15:01.517 [ 00:15:01.517 { 00:15:01.517 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:01.517 "subtype": "Discovery", 00:15:01.517 "listen_addresses": [], 00:15:01.517 "allow_any_host": true, 00:15:01.517 "hosts": [] 00:15:01.517 }, 00:15:01.517 { 00:15:01.517 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:01.517 "subtype": "NVMe", 00:15:01.517 "listen_addresses": [ 00:15:01.517 { 00:15:01.517 "trtype": "VFIOUSER", 00:15:01.517 "adrfam": "IPv4", 00:15:01.517 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:01.517 "trsvcid": "0" 00:15:01.517 } 00:15:01.517 ], 00:15:01.517 "allow_any_host": true, 00:15:01.517 "hosts": [], 00:15:01.517 "serial_number": "SPDK1", 00:15:01.517 "model_number": "SPDK bdev Controller", 00:15:01.517 "max_namespaces": 32, 00:15:01.518 "min_cntlid": 1, 00:15:01.518 "max_cntlid": 65519, 00:15:01.518 "namespaces": [ 00:15:01.518 { 00:15:01.518 "nsid": 1, 00:15:01.518 "bdev_name": "Malloc1", 00:15:01.518 "name": "Malloc1", 00:15:01.518 "nguid": "E051EAEA063B491D87A058CDD827EE97", 00:15:01.518 "uuid": "e051eaea-063b-491d-87a0-58cdd827ee97" 00:15:01.518 }, 00:15:01.518 { 00:15:01.518 "nsid": 2, 00:15:01.518 "bdev_name": "Malloc3", 00:15:01.518 "name": "Malloc3", 00:15:01.518 "nguid": "A023FBC89B11418793F023B8A397BC09", 00:15:01.518 "uuid": "a023fbc8-9b11-4187-93f0-23b8a397bc09" 00:15:01.518 } 00:15:01.518 ] 00:15:01.518 }, 00:15:01.518 { 00:15:01.518 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:01.518 "subtype": "NVMe", 00:15:01.518 "listen_addresses": [ 00:15:01.518 { 00:15:01.518 "trtype": "VFIOUSER", 00:15:01.518 "adrfam": "IPv4", 00:15:01.518 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:01.518 "trsvcid": "0" 00:15:01.518 } 00:15:01.518 ], 00:15:01.518 "allow_any_host": true, 00:15:01.518 "hosts": [], 00:15:01.518 "serial_number": "SPDK2", 00:15:01.518 "model_number": "SPDK bdev Controller", 00:15:01.518 
"max_namespaces": 32, 00:15:01.518 "min_cntlid": 1, 00:15:01.518 "max_cntlid": 65519, 00:15:01.518 "namespaces": [ 00:15:01.518 { 00:15:01.518 "nsid": 1, 00:15:01.518 "bdev_name": "Malloc2", 00:15:01.518 "name": "Malloc2", 00:15:01.518 "nguid": "667B11C34B70452EB00413EAED5941D9", 00:15:01.518 "uuid": "667b11c3-4b70-452e-b004-13eaed5941d9" 00:15:01.518 }, 00:15:01.518 { 00:15:01.518 "nsid": 2, 00:15:01.518 "bdev_name": "Malloc4", 00:15:01.518 "name": "Malloc4", 00:15:01.518 "nguid": "3E589B56824A4793AE3BA1BEC24F003A", 00:15:01.518 "uuid": "3e589b56-824a-4793-ae3b-a1bec24f003a" 00:15:01.518 } 00:15:01.518 ] 00:15:01.518 } 00:15:01.518 ] 00:15:01.518 01:43:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 4015154 00:15:01.518 01:43:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:01.518 01:43:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 4009551 00:15:01.518 01:43:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@947 -- # '[' -z 4009551 ']' 00:15:01.518 01:43:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # kill -0 4009551 00:15:01.518 01:43:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # uname 00:15:01.518 01:43:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:15:01.518 01:43:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4009551 00:15:01.776 01:43:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:15:01.776 01:43:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:15:01.776 01:43:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4009551' 00:15:01.776 killing process with pid 4009551 00:15:01.776 01:43:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # kill 4009551 00:15:01.776 [2024-05-15 01:43:25.467169] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:01.776 01:43:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@971 -- # wait 4009551 00:15:02.034 01:43:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:02.034 01:43:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:02.034 01:43:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:02.034 01:43:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:02.034 01:43:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:02.034 01:43:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=4015297 00:15:02.034 01:43:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:02.034 01:43:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 4015297' 00:15:02.034 Process pid: 4015297 00:15:02.034 01:43:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:02.034 01:43:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 4015297 00:15:02.034 01:43:25 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@828 -- # '[' -z 4015297 ']' 00:15:02.034 01:43:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.034 01:43:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:02.034 01:43:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.034 01:43:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:02.034 01:43:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:02.034 [2024-05-15 01:43:25.821914] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:02.034 [2024-05-15 01:43:25.822955] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:15:02.034 [2024-05-15 01:43:25.823016] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:02.034 EAL: No free 2048 kB hugepages reported on node 1 00:15:02.034 [2024-05-15 01:43:25.895444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:02.290 [2024-05-15 01:43:25.985837] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:02.290 [2024-05-15 01:43:25.985892] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:02.290 [2024-05-15 01:43:25.985917] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:02.290 [2024-05-15 01:43:25.985939] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:02.290 [2024-05-15 01:43:25.985958] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:02.290 [2024-05-15 01:43:25.986028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.290 [2024-05-15 01:43:25.986085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:02.290 [2024-05-15 01:43:25.986197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:02.290 [2024-05-15 01:43:25.986212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.290 [2024-05-15 01:43:26.087477] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:02.290 [2024-05-15 01:43:26.087703] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:02.290 [2024-05-15 01:43:26.088022] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:02.290 [2024-05-15 01:43:26.088672] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:02.290 [2024-05-15 01:43:26.088930] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
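The interrupt-mode bring-up above boils down to a short sequence; a sketch under the assumption that $SPDK points at the repo root and that the harness's waitforlisten helper can be approximated by polling the default RPC socket:

  # Start nvmf_tgt in interrupt mode on cores 0-3, as traced above.
  "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
  nvmfpid=$!
  # Do not issue RPCs until the target listens on the default socket
  # (/var/tmp/spdk.sock, per the waitforlisten messages above).
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done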
00:15:02.290 01:43:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:02.290 01:43:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@861 -- # return 0 00:15:02.290 01:43:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:03.222 01:43:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:03.479 01:43:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:03.479 01:43:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:03.479 01:43:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:03.479 01:43:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:03.479 01:43:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:03.738 Malloc1 00:15:03.738 01:43:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:03.997 01:43:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:04.254 01:43:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:04.511 [2024-05-15 01:43:28.342837] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:04.511 01:43:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:04.511 01:43:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:04.511 01:43:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:04.768 Malloc2 00:15:04.769 01:43:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:05.026 01:43:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:05.284 01:43:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:05.541 01:43:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:05.541 01:43:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 4015297 00:15:05.541 01:43:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@947 -- # '[' -z 4015297 ']' 00:15:05.541 01:43:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # kill -0 4015297 
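Collapsed out of the xtrace above, setup_nvmf_vfio_user reduces to a small per-device loop; a sketch where $SPDK and NUM_DEVICES (2 in this run, per the seq 1 2 above) stand in for the script's environment:

  rpc="$SPDK/scripts/rpc.py"
  $rpc nvmf_create_transport -t VFIOUSER -M -I   # transport flags exactly as traced
  mkdir -p /var/run/vfio-user
  for i in $(seq 1 "$NUM_DEVICES"); do
      dir="/var/run/vfio-user/domain/vfio-user$i/$i"
      mkdir -p "$dir"
      $rpc bdev_malloc_create 64 512 -b "Malloc$i"
      $rpc nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
      $rpc nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
      $rpc nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" -t VFIOUSER -a "$dir" -s 0
  done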
00:15:05.541 01:43:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # uname 00:15:05.541 01:43:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:15:05.541 01:43:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4015297 00:15:05.541 01:43:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:15:05.541 01:43:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:15:05.541 01:43:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4015297' 00:15:05.541 killing process with pid 4015297 00:15:05.541 01:43:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # kill 4015297 00:15:05.541 [2024-05-15 01:43:29.390136] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:05.541 01:43:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@971 -- # wait 4015297 00:15:05.800 01:43:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:05.800 01:43:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:05.800 00:15:05.800 real 0m52.467s 00:15:05.800 user 3m27.503s 00:15:05.800 sys 0m4.293s 00:15:05.800 01:43:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:05.800 01:43:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:05.800 ************************************ 00:15:05.800 END TEST nvmf_vfio_user 00:15:05.800 ************************************ 00:15:05.800 01:43:29 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:05.800 01:43:29 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:15:05.800 01:43:29 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:05.800 01:43:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:05.800 ************************************ 00:15:05.800 START TEST nvmf_vfio_user_nvme_compliance 00:15:05.800 ************************************ 00:15:05.800 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:06.064 * Looking for test storage... 
00:15:06.064 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:06.064 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:06.064 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:06.064 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=4015823 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 4015823' 00:15:06.065 Process pid: 4015823 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 4015823 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@828 -- # '[' -z 4015823 ']' 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:06.065 01:43:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:06.065 [2024-05-15 01:43:29.838539] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:15:06.065 [2024-05-15 01:43:29.838632] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:06.065 EAL: No free 2048 kB hugepages reported on node 1 00:15:06.065 [2024-05-15 01:43:29.906526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:06.065 [2024-05-15 01:43:29.990320] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:06.065 [2024-05-15 01:43:29.990368] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:06.323 [2024-05-15 01:43:29.990391] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:06.323 [2024-05-15 01:43:29.990419] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:06.323 [2024-05-15 01:43:29.990437] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:06.323 [2024-05-15 01:43:29.990516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:06.323 [2024-05-15 01:43:29.990566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:06.323 [2024-05-15 01:43:29.990575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.323 01:43:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:06.323 01:43:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@861 -- # return 0 00:15:06.323 01:43:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:07.254 01:43:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:07.254 01:43:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:07.254 01:43:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:07.254 01:43:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:07.254 01:43:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:07.254 01:43:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:07.254 01:43:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:07.254 01:43:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:07.254 01:43:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:07.254 01:43:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:07.254 malloc0 00:15:07.254 01:43:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:07.254 01:43:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:07.254 01:43:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:07.254 01:43:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:07.254 01:43:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:07.254 01:43:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:07.254 01:43:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:07.254 01:43:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:07.254 01:43:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:07.254 01:43:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:07.254 01:43:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:07.254 01:43:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:07.254 [2024-05-15 01:43:31.181460] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated 
feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:07.254 01:43:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:07.254 01:43:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:07.512 EAL: No free 2048 kB hugepages reported on node 1 00:15:07.512 00:15:07.512 00:15:07.512 CUnit - A unit testing framework for C - Version 2.1-3 00:15:07.512 http://cunit.sourceforge.net/ 00:15:07.512 00:15:07.512 00:15:07.512 Suite: nvme_compliance 00:15:07.512 Test: admin_identify_ctrlr_verify_dptr ...[2024-05-15 01:43:31.360766] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:07.512 [2024-05-15 01:43:31.362260] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:07.512 [2024-05-15 01:43:31.362285] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:07.512 [2024-05-15 01:43:31.362298] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:07.512 [2024-05-15 01:43:31.363787] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:07.512 passed 00:15:07.769 Test: admin_identify_ctrlr_verify_fused ...[2024-05-15 01:43:31.448361] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:07.769 [2024-05-15 01:43:31.451381] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:07.769 passed 00:15:07.769 Test: admin_identify_ns ...[2024-05-15 01:43:31.536893] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:07.769 [2024-05-15 01:43:31.600247] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:07.769 [2024-05-15 01:43:31.608250] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:07.769 [2024-05-15 01:43:31.629360] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:07.769 passed 00:15:08.027 Test: admin_get_features_mandatory_features ...[2024-05-15 01:43:31.711011] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:08.027 [2024-05-15 01:43:31.714036] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:08.027 passed 00:15:08.027 Test: admin_get_features_optional_features ...[2024-05-15 01:43:31.797606] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:08.027 [2024-05-15 01:43:31.800624] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:08.027 passed 00:15:08.027 Test: admin_set_features_number_of_queues ...[2024-05-15 01:43:31.885825] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:08.284 [2024-05-15 01:43:31.990316] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:08.284 passed 00:15:08.285 Test: admin_get_log_page_mandatory_logs ...[2024-05-15 01:43:32.074010] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:08.285 [2024-05-15 01:43:32.077040] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:08.285 passed 
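The compliance suite is driven the same way as the targets above: one malloc-backed subsystem, one vfio-user listener, then the tester pointed at the socket directory. A sketch of the steps traced in this test (rpc_cmd in the trace is the harness wrapper around rpc.py; $SPDK is an assumption for the repo root):

  rpc="$SPDK/scripts/rpc.py"
  $rpc nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  $rpc bdev_malloc_create 64 512 -b malloc0
  $rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
  $rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  $rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
  "$SPDK/test/nvme/compliance/nvme_compliance" -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'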
00:15:08.285 Test: admin_get_log_page_with_lpo ...[2024-05-15 01:43:32.160354] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:08.543 [2024-05-15 01:43:32.226245] ctrlr.c:2654:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:08.543 [2024-05-15 01:43:32.239325] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:08.543 passed 00:15:08.543 Test: fabric_property_get ...[2024-05-15 01:43:32.324560] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:08.543 [2024-05-15 01:43:32.325809] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:08.543 [2024-05-15 01:43:32.327599] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:08.543 passed 00:15:08.543 Test: admin_delete_io_sq_use_admin_qid ...[2024-05-15 01:43:32.410095] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:08.543 [2024-05-15 01:43:32.411393] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:08.543 [2024-05-15 01:43:32.413117] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:08.543 passed 00:15:08.801 Test: admin_delete_io_sq_delete_sq_twice ...[2024-05-15 01:43:32.499774] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:08.801 [2024-05-15 01:43:32.583225] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:08.801 [2024-05-15 01:43:32.599226] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:08.801 [2024-05-15 01:43:32.604332] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:08.801 passed 00:15:08.801 Test: admin_delete_io_cq_use_admin_qid ...[2024-05-15 01:43:32.687997] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:08.801 [2024-05-15 01:43:32.689290] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:08.801 [2024-05-15 01:43:32.691022] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:08.801 passed 00:15:09.059 Test: admin_delete_io_cq_delete_cq_first ...[2024-05-15 01:43:32.772128] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:09.059 [2024-05-15 01:43:32.847253] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:09.059 [2024-05-15 01:43:32.871225] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:09.059 [2024-05-15 01:43:32.876332] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:09.059 passed 00:15:09.059 Test: admin_create_io_cq_verify_iv_pc ...[2024-05-15 01:43:32.959665] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:09.059 [2024-05-15 01:43:32.960951] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:09.059 [2024-05-15 01:43:32.961004] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:09.059 [2024-05-15 01:43:32.963692] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:09.317 passed 00:15:09.317 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-05-15 
01:43:33.049395] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:09.317 [2024-05-15 01:43:33.141229] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:09.317 [2024-05-15 01:43:33.149228] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:09.317 [2024-05-15 01:43:33.157225] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:09.317 [2024-05-15 01:43:33.165228] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:09.317 [2024-05-15 01:43:33.194339] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:09.317 passed 00:15:09.575 Test: admin_create_io_sq_verify_pc ...[2024-05-15 01:43:33.280530] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:09.575 [2024-05-15 01:43:33.297237] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:09.575 [2024-05-15 01:43:33.314858] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:09.575 passed 00:15:09.575 Test: admin_create_io_qp_max_qps ...[2024-05-15 01:43:33.396420] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:10.946 [2024-05-15 01:43:34.500245] nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:11.206 [2024-05-15 01:43:34.897220] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:11.206 passed 00:15:11.206 Test: admin_create_io_sq_shared_cq ...[2024-05-15 01:43:34.978821] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:11.207 [2024-05-15 01:43:35.114229] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:11.467 [2024-05-15 01:43:35.151313] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:11.467 passed 00:15:11.467 00:15:11.467 Run Summary: Type Total Ran Passed Failed Inactive 00:15:11.467 suites 1 1 n/a 0 0 00:15:11.467 tests 18 18 18 0 0 00:15:11.467 asserts 360 360 360 0 n/a 00:15:11.467 00:15:11.467 Elapsed time = 1.571 seconds 00:15:11.467 01:43:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 4015823 00:15:11.467 01:43:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@947 -- # '[' -z 4015823 ']' 00:15:11.467 01:43:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # kill -0 4015823 00:15:11.467 01:43:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # uname 00:15:11.467 01:43:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:15:11.467 01:43:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4015823 00:15:11.467 01:43:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:15:11.467 01:43:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:15:11.467 01:43:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4015823' 00:15:11.467 killing process with pid 4015823 00:15:11.467 01:43:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@966 -- # kill 4015823 00:15:11.467 [2024-05-15 01:43:35.228947] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:11.467 01:43:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@971 -- # wait 4015823 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:11.725 00:15:11.725 real 0m5.750s 00:15:11.725 user 0m16.185s 00:15:11.725 sys 0m0.559s 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:11.725 ************************************ 00:15:11.725 END TEST nvmf_vfio_user_nvme_compliance 00:15:11.725 ************************************ 00:15:11.725 01:43:35 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:11.725 01:43:35 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:15:11.725 01:43:35 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:11.725 01:43:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:11.725 ************************************ 00:15:11.725 START TEST nvmf_vfio_user_fuzz 00:15:11.725 ************************************ 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:11.725 * Looking for test storage... 
00:15:11.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:11.725 01:43:35 
nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=4016608 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 4016608' 00:15:11.725 Process pid: 4016608 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 4016608 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@828 -- # '[' -z 4016608 ']' 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:11.725 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:11.983 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:11.983 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@861 -- # return 0 00:15:11.983 01:43:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:13.353 01:43:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:13.353 01:43:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:13.353 01:43:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:13.353 01:43:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:13.353 01:43:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:13.353 01:43:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:13.353 01:43:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:13.353 01:43:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:13.353 malloc0 00:15:13.353 01:43:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:13.353 01:43:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:13.353 01:43:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:13.354 01:43:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:13.354 01:43:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:13.354 01:43:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:13.354 01:43:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:13.354 01:43:36 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:15:13.354 01:43:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:13.354 01:43:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:13.354 01:43:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:13.354 01:43:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:13.354 01:43:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:13.354 01:43:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:13.354 01:43:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:45.441 Fuzzing completed. Shutting down the fuzz application 00:15:45.441 00:15:45.441 Dumping successful admin opcodes: 00:15:45.441 8, 9, 10, 24, 00:15:45.441 Dumping successful io opcodes: 00:15:45.441 0, 00:15:45.441 NS: 0x200003a1ef00 I/O qp, Total commands completed: 550271, total successful commands: 2116, random_seed: 2405891072 00:15:45.441 NS: 0x200003a1ef00 admin qp, Total commands completed: 121832, total successful commands: 1002, random_seed: 1097455744 00:15:45.441 01:44:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:45.441 01:44:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:45.441 01:44:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:45.441 01:44:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:45.441 01:44:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 4016608 00:15:45.441 01:44:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@947 -- # '[' -z 4016608 ']' 00:15:45.441 01:44:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # kill -0 4016608 00:15:45.441 01:44:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # uname 00:15:45.441 01:44:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:15:45.441 01:44:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4016608 00:15:45.441 01:44:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:15:45.441 01:44:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:15:45.441 01:44:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4016608' 00:15:45.441 killing process with pid 4016608 00:15:45.441 01:44:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # kill 4016608 00:15:45.441 01:44:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@971 -- # wait 4016608 00:15:45.441 01:44:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 
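For reference, the vfio_user_fuzz bring-up traced above reduces to the following sequence. This is a hand-written condensation, not part of this run's output: $SPDK_DIR stands in for the Jenkins workspace path, scripts/rpc.py is the standalone equivalent of the test suite's rpc_cmd wrapper, and the sleep replaces the suite's waitforlisten helper.

# Condensed replay of the VFIO-user fuzz sequence in the trace above.
SPDK_DIR=${SPDK_DIR:-/path/to/spdk}   # hypothetical location, not from this run
traddr=/var/run/vfio-user
nqn=nqn.2021-09.io.spdk:cnode0

$SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
sleep 1                               # the suite uses waitforlisten instead

mkdir -p $traddr
$SPDK_DIR/scripts/rpc.py nvmf_create_transport -t VFIOUSER
$SPDK_DIR/scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
$SPDK_DIR/scripts/rpc.py nvmf_create_subsystem $nqn -a -s spdk
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns $nqn malloc0
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener $nqn -t VFIOUSER -a $traddr -s 0

# 30-second randomized admin + I/O fuzz against the vfio-user endpoint
$SPDK_DIR/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
    -F "trtype:VFIOUSER subnqn:$nqn traddr:$traddr" -N -a

$SPDK_DIR/scripts/rpc.py nvmf_delete_subsystem $nqn
kill $nvmfpid
rm -rf $traddr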
00:15:45.441 01:44:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:45.441 00:15:45.441 real 0m32.208s 00:15:45.441 user 0m30.768s 00:15:45.441 sys 0m28.145s 00:15:45.441 01:44:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:45.441 01:44:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:45.441 ************************************ 00:15:45.441 END TEST nvmf_vfio_user_fuzz 00:15:45.441 ************************************ 00:15:45.441 01:44:07 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:15:45.441 01:44:07 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:15:45.441 01:44:07 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:45.441 01:44:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:45.441 ************************************ 00:15:45.441 START TEST nvmf_host_management 00:15:45.441 ************************************ 00:15:45.441 01:44:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:15:45.441 * Looking for test storage... 00:15:45.441 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:45.441 01:44:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:45.441 01:44:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:15:45.441 01:44:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:45.441 01:44:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:45.441 01:44:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:45.441 01:44:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:45.441 01:44:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:45.441 01:44:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:45.441 01:44:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:45.441 01:44:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:45.441 01:44:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:45.441 01:44:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:45.441 01:44:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:45.441 01:44:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:45.441 01:44:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:45.441 01:44:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:45.441 01:44:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:45.441 01:44:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:45.441 01:44:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:45.441 01:44:07 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:45.441 01:44:07 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:45.441 01:44:07 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:45.441 01:44:07 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.441 01:44:07 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.442 01:44:07 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.442 01:44:07 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:15:45.442 01:44:07 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.442 01:44:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:15:45.442 01:44:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:45.442 01:44:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:45.442 01:44:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:45.442 01:44:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:45.442 01:44:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:15:45.442 01:44:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:45.442 01:44:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:45.442 01:44:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:45.442 01:44:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:45.442 01:44:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:45.442 01:44:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:15:45.442 01:44:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:45.442 01:44:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:45.442 01:44:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:45.442 01:44:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:45.442 01:44:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:45.442 01:44:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.442 01:44:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:45.442 01:44:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.442 01:44:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:45.442 01:44:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:45.442 01:44:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:15:45.442 01:44:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:46.379 01:44:10 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:15:46.379 Found 0000:09:00.0 (0x8086 - 0x159b) 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:15:46.379 Found 0000:09:00.1 (0x8086 - 0x159b) 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
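The trace above shows nvmf/common.sh matching both ports of an Intel E810 NIC by PCI vendor:device ID (0x8086:0x159b, bound to the ice driver); the loop that follows resolves each PCI function to its kernel net device through the same /sys/bus/pci/devices/$pci/net/* glob. A minimal standalone sketch of that lookup, with the device addresses copied from this run:

# Map a PCI function to its kernel net device name via sysfs.
for pci in 0000:09:00.0 0000:09:00.1; do
    for netdir in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$netdir" ] || continue      # no netdev bound to this function
        echo "Found net device under $pci: ${netdir##*/}"
    done
done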
00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:15:46.379 Found net devices under 0000:09:00.0: cvl_0_0 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:46.379 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:15:46.379 Found net devices under 0000:09:00.1: cvl_0_1 00:15:46.638 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:46.638 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:46.638 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:15:46.638 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:46.638 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:46.638 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:46.638 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:46.638 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:46.638 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:46.638 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:46.638 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:46.638 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:46.638 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:46.638 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:46.638 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:15:46.638 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:46.638 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:46.638 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:46.638 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:46.638 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:46.639 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:46.639 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:46.639 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:46.639 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:46.639 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:46.639 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:46.639 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:46.639 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:15:46.639 00:15:46.639 --- 10.0.0.2 ping statistics --- 00:15:46.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.639 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:15:46.639 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:46.639 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:46.639 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:15:46.639 00:15:46.639 --- 10.0.0.1 ping statistics --- 00:15:46.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.639 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:15:46.639 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:46.639 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:15:46.639 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:46.639 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:46.639 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:46.639 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:46.639 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:46.639 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:46.639 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:46.639 01:44:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:15:46.639 01:44:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:15:46.639 01:44:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:15:46.639 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:46.639 01:44:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@721 -- # xtrace_disable 00:15:46.639 01:44:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:46.639 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=4022342 00:15:46.639 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:15:46.639 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 4022342 00:15:46.639 01:44:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@828 -- # '[' -z 4022342 ']' 00:15:46.639 01:44:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.639 01:44:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:46.639 01:44:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:46.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:46.639 01:44:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:46.639 01:44:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:46.639 [2024-05-15 01:44:10.515746] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
00:15:46.639 [2024-05-15 01:44:10.515836] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:46.639 EAL: No free 2048 kB hugepages reported on node 1 00:15:46.898 [2024-05-15 01:44:10.594316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:46.898 [2024-05-15 01:44:10.685406] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:46.898 [2024-05-15 01:44:10.685463] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:46.898 [2024-05-15 01:44:10.685485] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:46.898 [2024-05-15 01:44:10.685496] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:46.898 [2024-05-15 01:44:10.685522] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:46.898 [2024-05-15 01:44:10.685622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:46.898 [2024-05-15 01:44:10.685741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:46.898 [2024-05-15 01:44:10.685774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:46.898 [2024-05-15 01:44:10.685777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:46.898 01:44:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:46.898 01:44:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@861 -- # return 0 00:15:46.898 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:46.898 01:44:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@727 -- # xtrace_disable 00:15:46.898 01:44:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:46.898 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:46.898 01:44:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:46.898 01:44:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:46.898 01:44:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:46.898 [2024-05-15 01:44:10.826744] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:47.156 01:44:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:47.156 01:44:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:15:47.156 01:44:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@721 -- # xtrace_disable 00:15:47.156 01:44:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:47.156 01:44:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:47.156 01:44:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:15:47.156 01:44:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:15:47.156 01:44:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:47.156 01:44:10 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:47.156 Malloc0 00:15:47.156 [2024-05-15 01:44:10.885018] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:47.156 [2024-05-15 01:44:10.885354] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:47.156 01:44:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:47.156 01:44:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:15:47.156 01:44:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@727 -- # xtrace_disable 00:15:47.156 01:44:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:47.156 01:44:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=4022394 00:15:47.156 01:44:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 4022394 /var/tmp/bdevperf.sock 00:15:47.156 01:44:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@828 -- # '[' -z 4022394 ']' 00:15:47.156 01:44:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:47.156 01:44:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:15:47.156 01:44:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:15:47.156 01:44:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:47.156 01:44:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:47.156 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:15:47.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:47.156 01:44:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:47.156 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:15:47.156 01:44:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:47.156 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:47.156 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:47.156 { 00:15:47.156 "params": { 00:15:47.156 "name": "Nvme$subsystem", 00:15:47.156 "trtype": "$TEST_TRANSPORT", 00:15:47.156 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:47.156 "adrfam": "ipv4", 00:15:47.156 "trsvcid": "$NVMF_PORT", 00:15:47.156 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:47.156 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:47.156 "hdgst": ${hdgst:-false}, 00:15:47.156 "ddgst": ${ddgst:-false} 00:15:47.156 }, 00:15:47.156 "method": "bdev_nvme_attach_controller" 00:15:47.156 } 00:15:47.156 EOF 00:15:47.156 )") 00:15:47.156 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:15:47.156 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
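The bdevperf launch above shows the suite's config plumbing: gen_nvmf_target_json expands a here-doc into a bdev_nvme_attach_controller fragment, jq assembles the config, and it reaches bdevperf as --json /dev/fd/63, i.e. via process substitution rather than a file on disk. A minimal sketch of the same pattern follows; the attach parameters are copied from this run, while the surrounding "subsystems" wrapper is reconstructed from SPDK's JSON config schema and $SPDK_DIR is again a stand-in:

# Generate the attach-controller config on the fly and hand it to
# bdevperf through process substitution -- no temp config file.
gen_json() {
    cat <<EOF
{"subsystems": [{"subsystem": "bdev", "config": [{
  "method": "bdev_nvme_attach_controller",
  "params": {"name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
             "adrfam": "ipv4", "trsvcid": "4420",
             "subnqn": "nqn.2016-06.io.spdk:cnode0",
             "hostnqn": "nqn.2016-06.io.spdk:host0",
             "hdgst": false, "ddgst": false}}]}]}
EOF
}
$SPDK_DIR/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_json) -q 64 -o 65536 -w verify -t 10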
00:15:47.156 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:15:47.156 01:44:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:47.156 "params": { 00:15:47.156 "name": "Nvme0", 00:15:47.156 "trtype": "tcp", 00:15:47.156 "traddr": "10.0.0.2", 00:15:47.156 "adrfam": "ipv4", 00:15:47.156 "trsvcid": "4420", 00:15:47.156 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:47.156 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:47.156 "hdgst": false, 00:15:47.156 "ddgst": false 00:15:47.156 }, 00:15:47.156 "method": "bdev_nvme_attach_controller" 00:15:47.156 }' 00:15:47.156 [2024-05-15 01:44:10.953290] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:15:47.156 [2024-05-15 01:44:10.953370] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4022394 ] 00:15:47.156 EAL: No free 2048 kB hugepages reported on node 1 00:15:47.156 [2024-05-15 01:44:11.025522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.414 [2024-05-15 01:44:11.109069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.414 Running I/O for 10 seconds... 00:15:47.672 01:44:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:47.672 01:44:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@861 -- # return 0 00:15:47.672 01:44:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:15:47.672 01:44:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:47.672 01:44:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:47.672 01:44:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:47.672 01:44:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:47.672 01:44:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:15:47.672 01:44:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:15:47.672 01:44:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:15:47.672 01:44:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:15:47.672 01:44:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:15:47.672 01:44:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:15:47.672 01:44:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:15:47.672 01:44:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:15:47.672 01:44:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:15:47.672 01:44:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:47.672 01:44:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:47.672 01:44:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:47.672 01:44:11 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:15:47.672 01:44:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:15:47.672 01:44:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:15:47.931 01:44:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:15:47.931 01:44:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:15:47.931 01:44:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:15:47.931 01:44:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:15:47.931 01:44:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:47.931 01:44:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:47.931 01:44:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:47.931 01:44:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=530 00:15:47.932 01:44:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 530 -ge 100 ']' 00:15:47.932 01:44:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:15:47.932 01:44:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:15:47.932 01:44:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:15:47.932 01:44:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:15:47.932 01:44:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:47.932 01:44:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:47.932 [2024-05-15 01:44:11.716506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:47.932 [2024-05-15 01:44:11.716590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.932 [2024-05-15 01:44:11.716608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:47.932 [2024-05-15 01:44:11.716623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.932 [2024-05-15 01:44:11.716637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:47.932 [2024-05-15 01:44:11.716654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.932 [2024-05-15 01:44:11.716669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:47.932 [2024-05-15 01:44:11.716682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.932 [2024-05-15 01:44:11.716696] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d1dc0 is same with the state(5) to be set 00:15:47.932 01:44:11 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:47.932 01:44:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:15:47.932 01:44:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:47.932 01:44:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:47.932 [2024-05-15 01:44:11.727541] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17d1dc0 (9): Bad file descriptor 00:15:47.932 [2024-05-15 01:44:11.727634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.932 [2024-05-15 01:44:11.727655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.932 [2024-05-15 01:44:11.727682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.932 [2024-05-15 01:44:11.727698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.932 [2024-05-15 01:44:11.727714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.932 [2024-05-15 01:44:11.727729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.932 [2024-05-15 01:44:11.727754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.932 [2024-05-15 01:44:11.727769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.932 [2024-05-15 01:44:11.727784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.932 [2024-05-15 01:44:11.727797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.932 [2024-05-15 01:44:11.727812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.932 [2024-05-15 01:44:11.727825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.932 [2024-05-15 01:44:11.727841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.932 [2024-05-15 01:44:11.727856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.932 [2024-05-15 01:44:11.727871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.932 [2024-05-15 01:44:11.727884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.932 [2024-05-15 01:44:11.727899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.932 [2024-05-15 01:44:11.727913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.932 [2024-05-15 01:44:11.727945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.932 [2024-05-15 01:44:11.727960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.932 [2024-05-15 01:44:11.727976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.932 [2024-05-15 01:44:11.727990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.932 [2024-05-15 01:44:11.728005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.932 [2024-05-15 01:44:11.728020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.932 [2024-05-15 01:44:11.728036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.932 [2024-05-15 01:44:11.728050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.932 [2024-05-15 01:44:11.728065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.932 [2024-05-15 01:44:11.728078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.932 [2024-05-15 01:44:11.728095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.932 [2024-05-15 01:44:11.728109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.932 [2024-05-15 01:44:11.728124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.932 [2024-05-15 01:44:11.728141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.932 [2024-05-15 01:44:11.728157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.932 [2024-05-15 01:44:11.728172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.932 [2024-05-15 01:44:11.728188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.932 [2024-05-15 01:44:11.728202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.932 [2024-05-15 01:44:11.728227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 
lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:15:47.932 [2024-05-15 01:44:11.728244 - 01:44:11.729684] nvme_qpair.c: *NOTICE*: [45 further WRITE commands (sqid:1 cid:19-63 nsid:1 lba:84352-89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 -- repeated print_command/print_completion pairs collapsed]
00:15:47.932 01:44:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:15:47.932 01:44:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:15:47.934 [2024-05-15 01:44:11.729763] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17cc1f0 was disconnected and freed. reset controller.
00:15:47.934 [2024-05-15 01:44:11.730883] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:15:47.934 task offset: 81920 on job bdev=Nvme0n1 fails
00:15:47.934
00:15:47.934 Latency(us)
00:15:47.934 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:47.934 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:47.934 Job: Nvme0n1 ended in about 0.40 seconds with error
00:15:47.934 Verification LBA range: start 0x0 length 0x400
00:15:47.934 Nvme0n1 : 0.40 1596.80 99.80 159.68 0.00 35382.77 2560.76 33593.27
00:15:47.934 ===================================================================================================================
00:15:47.934 Total : 1596.80 99.80 159.68 0.00 35382.77 2560.76 33593.27
00:15:47.934
00:15:47.934 [2024-05-15 01:44:11.732755] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:15:47.934 [2024-05-15 01:44:11.740263] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
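The abort flood above is the expected signature of this test case: host_management tears the target down underneath an active bdevperf job, so every WRITE still queued on qpair 1 completes as ABORTED - SQ DELETION once the submission queue is deleted, after which the host driver frees the qpair and resets the controller. The reap step that follows in the trace has to tolerate a perf process that already exited with the target; a minimal sketch of that pattern (the $perf_pid variable is a placeholder for the pid captured at launch):

    # Reap the I/O generator; it may already be gone, so ignore kill errors
    # ($perf_pid is a hypothetical name for the pid recorded when it was started).
    kill -9 "$perf_pid" 2>/dev/null || true
    # Collect the exit status if the process was still our child.
    wait "$perf_pid" 2>/dev/null || true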
00:15:48.867 01:44:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 4022394 00:15:48.867 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (4022394) - No such process 00:15:48.867 01:44:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:15:48.867 01:44:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:15:48.867 01:44:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:15:48.867 01:44:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:15:48.867 01:44:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:15:48.867 01:44:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:15:48.867 01:44:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:48.867 01:44:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:48.867 { 00:15:48.867 "params": { 00:15:48.867 "name": "Nvme$subsystem", 00:15:48.867 "trtype": "$TEST_TRANSPORT", 00:15:48.867 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:48.867 "adrfam": "ipv4", 00:15:48.867 "trsvcid": "$NVMF_PORT", 00:15:48.867 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:48.867 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:48.867 "hdgst": ${hdgst:-false}, 00:15:48.867 "ddgst": ${ddgst:-false} 00:15:48.867 }, 00:15:48.867 "method": "bdev_nvme_attach_controller" 00:15:48.867 } 00:15:48.867 EOF 00:15:48.867 )") 00:15:48.867 01:44:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:15:48.867 01:44:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:15:48.867 01:44:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:15:48.867 01:44:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:48.867 "params": { 00:15:48.867 "name": "Nvme0", 00:15:48.867 "trtype": "tcp", 00:15:48.867 "traddr": "10.0.0.2", 00:15:48.867 "adrfam": "ipv4", 00:15:48.867 "trsvcid": "4420", 00:15:48.867 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:48.867 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:48.867 "hdgst": false, 00:15:48.867 "ddgst": false 00:15:48.867 }, 00:15:48.867 "method": "bdev_nvme_attach_controller" 00:15:48.867 }' 00:15:48.867 [2024-05-15 01:44:12.775198] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:15:48.867 [2024-05-15 01:44:12.775309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4022665 ] 00:15:49.126 EAL: No free 2048 kB hugepages reported on node 1 00:15:49.126 [2024-05-15 01:44:12.844415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.126 [2024-05-15 01:44:12.928991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.384 Running I/O for 1 seconds... 
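The second bdevperf pass above is configured entirely through the JSON rendered by gen_nvmf_target_json and handed over on /dev/fd/62; its results follow. For reference, a minimal standalone sketch that replays the same attach parameters from a regular file -- the /tmp path is arbitrary, and the outer "subsystems" envelope is the standard SPDK JSON-config wrapper, assumed here rather than shown in this excerpt:

    # Run from the SPDK repo root; parameters copied from the rendered config above.
    cat > /tmp/bdevperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    ./build/examples/bdevperf --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1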
00:15:50.318
00:15:50.318 Latency(us)
00:15:50.318 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:50.318 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:50.318 Verification LBA range: start 0x0 length 0x400
00:15:50.318 Nvme0n1 : 1.01 1655.37 103.46 0.00 0.00 38027.93 6747.78 33204.91
00:15:50.318 ===================================================================================================================
00:15:50.318 Total : 1655.37 103.46 0.00 0.00 38027.93 6747.78 33204.91
00:15:50.576 01:44:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:15:50.576 01:44:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:15:50.576 01:44:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:15:50.576 01:44:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:15:50.576 01:44:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:15:50.576 01:44:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup
00:15:50.576 01:44:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync
00:15:50.576 01:44:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:15:50.576 01:44:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e
00:15:50.576 01:44:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20}
00:15:50.576 01:44:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:15:50.576 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:15:50.576 01:44:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:15:50.576 01:44:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e
00:15:50.576 01:44:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0
00:15:50.576 01:44:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 4022342 ']'
00:15:50.576 01:44:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 4022342
00:15:50.576 01:44:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@947 -- # '[' -z 4022342 ']'
00:15:50.576 01:44:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # kill -0 4022342
00:15:50.576 01:44:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # uname
00:15:50.576 01:44:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:15:50.576 01:44:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4022342
00:15:50.576 01:44:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # process_name=reactor_1
00:15:50.576 01:44:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']'
00:15:50.576 01:44:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4022342'
00:15:50.576 killing process with pid 4022342
00:15:50.576 01:44:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # kill 4022342
00:15:50.576 [2024-05-15 01:44:14.467467] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation
'[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:50.576 01:44:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@971 -- # wait 4022342 00:15:50.834 [2024-05-15 01:44:14.694418] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:15:50.834 01:44:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:50.834 01:44:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:50.834 01:44:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:50.834 01:44:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:50.834 01:44:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:50.834 01:44:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:50.834 01:44:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:50.834 01:44:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.367 01:44:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:53.367 01:44:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:15:53.367 00:15:53.367 real 0m8.968s 00:15:53.367 user 0m19.043s 00:15:53.367 sys 0m2.981s 00:15:53.367 01:44:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:53.367 01:44:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:53.368 ************************************ 00:15:53.368 END TEST nvmf_host_management 00:15:53.368 ************************************ 00:15:53.368 01:44:16 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:15:53.368 01:44:16 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:15:53.368 01:44:16 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:53.368 01:44:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:53.368 ************************************ 00:15:53.368 START TEST nvmf_lvol 00:15:53.368 ************************************ 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:15:53.368 * Looking for test storage... 
00:15:53.368 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.368 01:44:16 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:15:53.368 01:44:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:55.899 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:55.899 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:15:55.899 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:55.899 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:55.899 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:55.899 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:55.899 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:55.899 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:15:55.900 Found 0000:09:00.0 (0x8086 - 0x159b) 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:15:55.900 Found 0000:09:00.1 (0x8086 - 0x159b) 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:15:55.900 Found net devices under 0000:09:00.0: cvl_0_0 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:15:55.900 Found net devices under 0000:09:00.1: cvl_0_1 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:55.900 
01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:55.900 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:55.900 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:15:55.900 00:15:55.900 --- 10.0.0.2 ping statistics --- 00:15:55.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.900 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:55.900 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:55.900 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:15:55.900 00:15:55.900 --- 10.0.0.1 ping statistics --- 00:15:55.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.900 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:55.900 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:55.901 01:44:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:15:55.901 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:55.901 01:44:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@721 -- # xtrace_disable 00:15:55.901 01:44:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:55.901 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=4025153 00:15:55.901 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:55.901 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 4025153 00:15:55.901 01:44:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@828 -- # '[' -z 4025153 ']' 00:15:55.901 01:44:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.901 01:44:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:55.901 01:44:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:55.901 01:44:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:55.901 01:44:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:55.901 [2024-05-15 01:44:19.450274] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:15:55.901 [2024-05-15 01:44:19.450360] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:55.901 EAL: No free 2048 kB hugepages reported on node 1 00:15:55.901 [2024-05-15 01:44:19.524640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:55.901 [2024-05-15 01:44:19.609507] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:55.901 [2024-05-15 01:44:19.609567] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
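Everything from here on runs against a split-namespace topology: nvmf_tcp_init moved one port (cvl_0_0, 10.0.0.2, the target side) into the cvl_0_0_ns_spdk namespace while its peer (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator, which is why nvmf_tgt is launched under ip netns exec. Condensed from the trace above, with the same device names and addresses:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays in root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator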
00:15:55.901 [2024-05-15 01:44:19.609580] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:55.901 [2024-05-15 01:44:19.609592] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:55.901 [2024-05-15 01:44:19.609601] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:55.901 [2024-05-15 01:44:19.609692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:55.901 [2024-05-15 01:44:19.609762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.901 [2024-05-15 01:44:19.609759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:55.901 01:44:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:55.901 01:44:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@861 -- # return 0 00:15:55.901 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:55.901 01:44:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@727 -- # xtrace_disable 00:15:55.901 01:44:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:55.901 01:44:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:55.901 01:44:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:56.159 [2024-05-15 01:44:19.964283] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:56.159 01:44:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:56.417 01:44:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:15:56.417 01:44:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:56.674 01:44:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:15:56.674 01:44:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:15:56.932 01:44:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:15:57.189 01:44:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=ac063a78-cc86-49f0-97ab-29e8ac308113 00:15:57.189 01:44:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ac063a78-cc86-49f0-97ab-29e8ac308113 lvol 20 00:15:57.446 01:44:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=6dba6e8e-bd01-4eea-882e-838fb2d01282 00:15:57.446 01:44:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:57.704 01:44:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6dba6e8e-bd01-4eea-882e-838fb2d01282 00:15:57.960 01:44:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
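That nvmf_subsystem_add_listener call is what trips the [listen_]address.transport deprecation notice printed below. Taken together, the RPC sequence this test has issued so far builds a small logical volume (size 20, the script's LVOL_BDEV_INIT_SIZE) on a RAID-0 of two malloc bdevs and exports it over NVMe/TCP. Condensed below; rpc.py abbreviates the full scripts/rpc.py path used in the trace, and $lvs/$lvol are placeholders for the UUIDs each call returns:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512                    # -> Malloc0
    rpc.py bdev_malloc_create 64 512                    # -> Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)    # lvstore UUID
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)   # 20 == LVOL_BDEV_INIT_SIZE
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420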
00:15:58.217 [2024-05-15 01:44:21.989406] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:58.217 [2024-05-15 01:44:21.989687] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:58.217 01:44:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:58.475 01:44:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=4025463 00:15:58.475 01:44:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:15:58.475 01:44:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:15:58.475 EAL: No free 2048 kB hugepages reported on node 1 00:15:59.435 01:44:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 6dba6e8e-bd01-4eea-882e-838fb2d01282 MY_SNAPSHOT 00:15:59.693 01:44:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1ad40fbb-b9de-46d9-9f24-9aa58658df60 00:15:59.693 01:44:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 6dba6e8e-bd01-4eea-882e-838fb2d01282 30 00:15:59.950 01:44:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 1ad40fbb-b9de-46d9-9f24-9aa58658df60 MY_CLONE 00:16:00.207 01:44:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=7775951d-bf20-46b5-89ad-4cf027b9aab6 00:16:00.207 01:44:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 7775951d-bf20-46b5-89ad-4cf027b9aab6 00:16:01.139 01:44:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 4025463 00:16:09.243 Initializing NVMe Controllers 00:16:09.243 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:09.243 Controller IO queue size 128, less than required. 00:16:09.243 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:09.243 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:09.243 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:09.243 Initialization complete. Launching workers. 
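With spdk_nvme_perf (pid 4025463) pushing random writes at the exported namespace from cores 3 and 4, the script walks the volume through its snapshot lifecycle, which is what the bdev_lvol_* calls above do; the perf results follow below. A condensed sketch, where $lvol/$snap/$clone stand in for the UUIDs returned by each call:

    snap=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # freeze current contents
    rpc.py bdev_lvol_resize "$lvol" 30                      # grow 20 -> 30 (LVOL_BDEV_FINAL_SIZE)
    clone=$(rpc.py bdev_lvol_clone "$snap" MY_CLONE)        # thin clone of the snapshot
    rpc.py bdev_lvol_inflate "$clone"                       # fully allocate the clone, decoupling it from the snapshot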
00:16:09.243 ========================================================
00:16:09.243 Latency(us)
00:16:09.243 Device Information : IOPS MiB/s Average min max
00:16:09.243 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10724.00 41.89 11944.09 2063.00 83786.50
00:16:09.243 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10642.20 41.57 12031.26 1918.74 75190.28
00:16:09.243 ========================================================
00:16:09.243 Total : 21366.20 83.46 11987.51 1918.74 83786.50
00:16:09.243
00:16:09.244 01:44:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:16:09.244 01:44:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6dba6e8e-bd01-4eea-882e-838fb2d01282
00:16:09.244 01:44:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ac063a78-cc86-49f0-97ab-29e8ac308113
00:16:09.808 01:44:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:16:09.808 01:44:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:16:09.808 01:44:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:16:09.808 01:44:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup
00:16:09.808 01:44:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync
00:16:09.808 01:44:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:16:09.808 01:44:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e
00:16:09.808 01:44:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20}
00:16:09.808 01:44:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:16:09.808 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:16:09.808 01:44:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:16:09.808 01:44:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e
00:16:09.808 01:44:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0
00:16:09.808 01:44:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 4025153 ']'
00:16:09.808 01:44:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 4025153
00:16:09.808 01:44:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@947 -- # '[' -z 4025153 ']'
00:16:09.808 01:44:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # kill -0 4025153
00:16:09.808 01:44:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # uname
00:16:09.808 01:44:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:16:09.808 01:44:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4025153
00:16:09.808 01:44:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # process_name=reactor_0
00:16:09.808 01:44:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']'
00:16:09.808 01:44:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4025153'
00:16:09.808 killing process with pid 4025153
00:16:09.808 01:44:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # kill 4025153
00:16:09.808 [2024-05-15 01:44:33.535676] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype'
scheduled for removal in v24.09 hit 1 times 00:16:09.808 01:44:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@971 -- # wait 4025153 00:16:10.065 01:44:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:10.065 01:44:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:10.065 01:44:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:10.065 01:44:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:10.065 01:44:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:10.065 01:44:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:10.065 01:44:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:10.065 01:44:33 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:11.969 01:44:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:11.969 00:16:11.969 real 0m19.018s 00:16:11.969 user 1m3.816s 00:16:11.969 sys 0m5.800s 00:16:11.969 01:44:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # xtrace_disable 00:16:11.969 01:44:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:11.969 ************************************ 00:16:11.969 END TEST nvmf_lvol 00:16:11.969 ************************************ 00:16:11.969 01:44:35 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:11.969 01:44:35 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:16:11.969 01:44:35 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:16:11.969 01:44:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:11.969 ************************************ 00:16:11.969 START TEST nvmf_lvs_grow 00:16:11.969 ************************************ 00:16:11.969 01:44:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:12.227 * Looking for test storage... 
00:16:12.227 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:12.227 01:44:35 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:16:12.228 01:44:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:16:14.758 Found 0000:09:00.0 (0x8086 - 0x159b) 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:16:14.758 Found 0000:09:00.1 (0x8086 - 0x159b) 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:16:14.758 Found net devices under 0000:09:00.0: cvl_0_0 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:16:14.758 Found net devices under 0000:09:00.1: cvl_0_1 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:14.758 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:14.758 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:16:14.758 00:16:14.758 --- 10.0.0.2 ping statistics --- 00:16:14.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.758 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:14.758 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:14.758 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:16:14.758 00:16:14.758 --- 10.0.0.1 ping statistics --- 00:16:14.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.758 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:14.758 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:16:14.759 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:14.759 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:14.759 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:14.759 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:14.759 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:14.759 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:14.759 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:14.759 01:44:38 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:16:14.759 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:14.759 01:44:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@721 -- # xtrace_disable 00:16:14.759 01:44:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:14.759 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=4029050 00:16:14.759 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:14.759 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 4029050 00:16:14.759 01:44:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@828 -- # '[' -z 4029050 ']' 00:16:14.759 01:44:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.759 01:44:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local max_retries=100 00:16:14.759 01:44:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:14.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:14.759 01:44:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # xtrace_disable 00:16:14.759 01:44:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:14.759 [2024-05-15 01:44:38.527678] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:16:14.759 [2024-05-15 01:44:38.527762] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:14.759 EAL: No free 2048 kB hugepages reported on node 1 00:16:14.759 [2024-05-15 01:44:38.607365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.017 [2024-05-15 01:44:38.694676] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:15.017 [2024-05-15 01:44:38.694728] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
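The nvmf_tcp_init sequence above — namespace creation, address assignment, the firewall rule, and the two verification pings — reduces to the sketch below. It assumes the same cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing this rig uses; on other hardware only the NIC names change.

    # target NIC moves into its own namespace; the initiator NIC stays in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP port
    ping -c 1 10.0.0.2                                                 # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target ns -> initiator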
00:16:15.017 [2024-05-15 01:44:38.694744] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:15.017 [2024-05-15 01:44:38.694758] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:15.017 [2024-05-15 01:44:38.694769] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:15.017 [2024-05-15 01:44:38.694797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.017 01:44:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:16:15.017 01:44:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@861 -- # return 0 00:16:15.017 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:15.017 01:44:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@727 -- # xtrace_disable 00:16:15.017 01:44:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:15.017 01:44:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:15.017 01:44:38 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:15.275 [2024-05-15 01:44:39.106234] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:15.275 01:44:39 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:16:15.275 01:44:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:16:15.275 01:44:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1104 -- # xtrace_disable 00:16:15.275 01:44:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:15.275 ************************************ 00:16:15.275 START TEST lvs_grow_clean 00:16:15.275 ************************************ 00:16:15.275 01:44:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # lvs_grow 00:16:15.275 01:44:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:15.275 01:44:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:15.275 01:44:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:15.275 01:44:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:15.275 01:44:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:15.276 01:44:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:15.276 01:44:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:15.276 01:44:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:15.276 01:44:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:15.842 01:44:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:16:15.842 01:44:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:15.842 01:44:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=693dacc6-8f30-42ae-a176-7086938e69a5 00:16:15.842 01:44:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 693dacc6-8f30-42ae-a176-7086938e69a5 00:16:15.842 01:44:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:16.100 01:44:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:16.100 01:44:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:16.100 01:44:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 693dacc6-8f30-42ae-a176-7086938e69a5 lvol 150 00:16:16.358 01:44:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=e5b6c6b1-784d-47f1-8b9d-166e6983489f 00:16:16.358 01:44:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:16.358 01:44:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:16.616 [2024-05-15 01:44:40.444377] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:16.616 [2024-05-15 01:44:40.444461] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:16.616 true 00:16:16.616 01:44:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 693dacc6-8f30-42ae-a176-7086938e69a5 00:16:16.616 01:44:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:16.874 01:44:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:16.874 01:44:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:17.132 01:44:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e5b6c6b1-784d-47f1-8b9d-166e6983489f 00:16:17.389 01:44:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:17.647 [2024-05-15 01:44:41.527484] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:17.647 [2024-05-15 
01:44:41.527801] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:17.647 01:44:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:17.905 01:44:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4029450 00:16:17.905 01:44:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:17.905 01:44:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:17.905 01:44:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4029450 /var/tmp/bdevperf.sock 00:16:17.905 01:44:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@828 -- # '[' -z 4029450 ']' 00:16:17.905 01:44:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:17.905 01:44:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:16:17.905 01:44:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:17.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:17.905 01:44:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:16:17.905 01:44:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:18.163 [2024-05-15 01:44:41.854376] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
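Everything lvs_grow has done up to this point is plain rpc.py plumbing against a file-backed AIO bdev. Condensed into a sketch — where $spdk, $rpc, $aio, $lvs and $lvol are shorthands of mine for the full paths and the UUIDs (693dacc6-8f30-42ae-a176-7086938e69a5, e5b6c6b1-784d-47f1-8b9d-166e6983489f) printed above — the sequence is roughly:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc=$spdk/scripts/rpc.py
    aio=$spdk/test/nvmf/target/aio_bdev
    truncate -s 200M "$aio"                                   # 49 usable 4M clusters
    $rpc bdev_aio_create "$aio" aio_bdev 4096
    $rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
         --md-pages-per-cluster-ratio 300 aio_bdev lvs        # prints $lvs
    $rpc bdev_lvol_create -u "$lvs" lvol 150                  # prints $lvol
    truncate -s 400M "$aio"                                   # grow the backing file
    $rpc bdev_aio_rescan aio_bdev                             # 51200 -> 102400 blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Note the lvstore itself still reports 49 data clusters here; bdev_lvol_grow_lvstore is deliberately issued later, while bdevperf is writing.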
00:16:18.163 [2024-05-15 01:44:41.854461] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4029450 ] 00:16:18.163 EAL: No free 2048 kB hugepages reported on node 1 00:16:18.163 [2024-05-15 01:44:41.933865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.163 [2024-05-15 01:44:42.021381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:18.422 01:44:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:16:18.422 01:44:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@861 -- # return 0 00:16:18.422 01:44:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:18.679 Nvme0n1 00:16:18.679 01:44:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:18.937 [ 00:16:18.937 { 00:16:18.937 "name": "Nvme0n1", 00:16:18.937 "aliases": [ 00:16:18.937 "e5b6c6b1-784d-47f1-8b9d-166e6983489f" 00:16:18.937 ], 00:16:18.937 "product_name": "NVMe disk", 00:16:18.937 "block_size": 4096, 00:16:18.937 "num_blocks": 38912, 00:16:18.937 "uuid": "e5b6c6b1-784d-47f1-8b9d-166e6983489f", 00:16:18.937 "assigned_rate_limits": { 00:16:18.937 "rw_ios_per_sec": 0, 00:16:18.937 "rw_mbytes_per_sec": 0, 00:16:18.937 "r_mbytes_per_sec": 0, 00:16:18.937 "w_mbytes_per_sec": 0 00:16:18.937 }, 00:16:18.937 "claimed": false, 00:16:18.937 "zoned": false, 00:16:18.937 "supported_io_types": { 00:16:18.937 "read": true, 00:16:18.937 "write": true, 00:16:18.937 "unmap": true, 00:16:18.937 "write_zeroes": true, 00:16:18.937 "flush": true, 00:16:18.937 "reset": true, 00:16:18.937 "compare": true, 00:16:18.937 "compare_and_write": true, 00:16:18.937 "abort": true, 00:16:18.937 "nvme_admin": true, 00:16:18.937 "nvme_io": true 00:16:18.937 }, 00:16:18.937 "memory_domains": [ 00:16:18.937 { 00:16:18.937 "dma_device_id": "system", 00:16:18.937 "dma_device_type": 1 00:16:18.937 } 00:16:18.937 ], 00:16:18.937 "driver_specific": { 00:16:18.937 "nvme": [ 00:16:18.937 { 00:16:18.937 "trid": { 00:16:18.937 "trtype": "TCP", 00:16:18.937 "adrfam": "IPv4", 00:16:18.937 "traddr": "10.0.0.2", 00:16:18.937 "trsvcid": "4420", 00:16:18.937 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:18.937 }, 00:16:18.937 "ctrlr_data": { 00:16:18.937 "cntlid": 1, 00:16:18.937 "vendor_id": "0x8086", 00:16:18.937 "model_number": "SPDK bdev Controller", 00:16:18.937 "serial_number": "SPDK0", 00:16:18.937 "firmware_revision": "24.05", 00:16:18.937 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:18.937 "oacs": { 00:16:18.937 "security": 0, 00:16:18.937 "format": 0, 00:16:18.937 "firmware": 0, 00:16:18.937 "ns_manage": 0 00:16:18.937 }, 00:16:18.937 "multi_ctrlr": true, 00:16:18.937 "ana_reporting": false 00:16:18.937 }, 00:16:18.937 "vs": { 00:16:18.937 "nvme_version": "1.3" 00:16:18.937 }, 00:16:18.937 "ns_data": { 00:16:18.937 "id": 1, 00:16:18.937 "can_share": true 00:16:18.937 } 00:16:18.937 } 00:16:18.937 ], 00:16:18.937 "mp_policy": "active_passive" 00:16:18.937 } 00:16:18.937 } 00:16:18.937 ] 00:16:19.195 01:44:42 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4029584 00:16:19.195 01:44:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:19.195 01:44:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:19.195 Running I/O for 10 seconds... 00:16:20.127 Latency(us) 00:16:20.127 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:20.127 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:20.127 Nvme0n1 : 1.00 14163.00 55.32 0.00 0.00 0.00 0.00 0.00 00:16:20.127 =================================================================================================================== 00:16:20.127 Total : 14163.00 55.32 0.00 0.00 0.00 0.00 0.00 00:16:20.127 00:16:21.060 01:44:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 693dacc6-8f30-42ae-a176-7086938e69a5 00:16:21.060 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:21.061 Nvme0n1 : 2.00 14646.50 57.21 0.00 0.00 0.00 0.00 0.00 00:16:21.061 =================================================================================================================== 00:16:21.061 Total : 14646.50 57.21 0.00 0.00 0.00 0.00 0.00 00:16:21.061 00:16:21.353 true 00:16:21.353 01:44:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 693dacc6-8f30-42ae-a176-7086938e69a5 00:16:21.353 01:44:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:21.611 01:44:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:21.611 01:44:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:21.611 01:44:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 4029584 00:16:22.177 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:22.177 Nvme0n1 : 3.00 14675.00 57.32 0.00 0.00 0.00 0.00 0.00 00:16:22.177 =================================================================================================================== 00:16:22.177 Total : 14675.00 57.32 0.00 0.00 0.00 0.00 0.00 00:16:22.177 00:16:23.111 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:23.111 Nvme0n1 : 4.00 14724.75 57.52 0.00 0.00 0.00 0.00 0.00 00:16:23.111 =================================================================================================================== 00:16:23.111 Total : 14724.75 57.52 0.00 0.00 0.00 0.00 0.00 00:16:23.111 00:16:24.485 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:24.485 Nvme0n1 : 5.00 14731.00 57.54 0.00 0.00 0.00 0.00 0.00 00:16:24.485 =================================================================================================================== 00:16:24.485 Total : 14731.00 57.54 0.00 0.00 0.00 0.00 0.00 00:16:24.485 00:16:25.419 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:25.419 Nvme0n1 : 6.00 14776.00 57.72 0.00 0.00 0.00 0.00 0.00 00:16:25.419 
===================================================================================================================
00:16:25.419 Total : 14776.00 57.72 0.00 0.00 0.00 0.00 0.00
00:16:25.419
00:16:26.354 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:16:26.354 Nvme0n1 : 7.00 14787.86 57.77 0.00 0.00 0.00 0.00 0.00
00:16:26.354 ===================================================================================================================
00:16:26.354 Total : 14787.86 57.77 0.00 0.00 0.00 0.00 0.00
00:16:26.354
00:16:27.287 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:16:27.287 Nvme0n1 : 8.00 14796.75 57.80 0.00 0.00 0.00 0.00 0.00
00:16:27.287 ===================================================================================================================
00:16:27.287 Total : 14796.75 57.80 0.00 0.00 0.00 0.00 0.00
00:16:27.287
00:16:28.220 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:16:28.220 Nvme0n1 : 9.00 14824.89 57.91 0.00 0.00 0.00 0.00 0.00
00:16:28.220 ===================================================================================================================
00:16:28.220 Total : 14824.89 57.91 0.00 0.00 0.00 0.00 0.00
00:16:28.220
00:16:29.154 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:16:29.154 Nvme0n1 : 10.00 14841.10 57.97 0.00 0.00 0.00 0.00 0.00
00:16:29.154 ===================================================================================================================
00:16:29.154 Total : 14841.10 57.97 0.00 0.00 0.00 0.00 0.00
00:16:29.154
00:16:29.154
00:16:29.154 Latency(us)
00:16:29.154 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:29.154 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:16:29.154 Nvme0n1 : 10.00 14842.89 57.98 0.00 0.00 8617.81 2536.49 17476.27
00:16:29.154 ===================================================================================================================
00:16:29.154 Total : 14842.89 57.98 0.00 0.00 8617.81 2536.49 17476.27
00:16:29.154 0
00:16:29.154 01:44:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4029450
00:16:29.154 01:44:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@947 -- # '[' -z 4029450 ']'
00:16:29.154 01:44:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # kill -0 4029450
00:16:29.154 01:44:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # uname
00:16:29.154 01:44:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:16:29.154 01:44:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4029450
00:16:29.154 01:44:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1
00:16:29.154 01:44:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']'
00:16:29.154 01:44:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4029450'
00:16:29.154 killing process with pid 4029450
00:16:29.154 01:44:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # kill 4029450
00:16:29.154 Received shutdown signal, test time was about 10.000000 seconds
00:16:29.154
00:16:29.154 Latency(us)
00:16:29.154 Device Information : runtime(s) IOPS MiB/s
Fail/s TO/s Average min max 00:16:29.154 =================================================================================================================== 00:16:29.154 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:29.154 01:44:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # wait 4029450 00:16:29.411 01:44:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:29.668 01:44:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:30.235 01:44:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 693dacc6-8f30-42ae-a176-7086938e69a5 00:16:30.235 01:44:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:16:30.235 01:44:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:16:30.235 01:44:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:16:30.235 01:44:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:30.493 [2024-05-15 01:44:54.351018] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:30.493 01:44:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 693dacc6-8f30-42ae-a176-7086938e69a5 00:16:30.493 01:44:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@649 -- # local es=0 00:16:30.493 01:44:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 693dacc6-8f30-42ae-a176-7086938e69a5 00:16:30.493 01:44:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:30.493 01:44:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:30.493 01:44:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:30.493 01:44:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:30.493 01:44:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:30.493 01:44:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:30.493 01:44:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:30.493 01:44:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:30.493 01:44:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 693dacc6-8f30-42ae-a176-7086938e69a5 00:16:30.750 request: 00:16:30.750 { 00:16:30.750 "uuid": "693dacc6-8f30-42ae-a176-7086938e69a5", 00:16:30.750 "method": "bdev_lvol_get_lvstores", 00:16:30.750 "req_id": 1 00:16:30.750 } 00:16:30.750 Got JSON-RPC error response 00:16:30.750 response: 00:16:30.750 { 00:16:30.750 "code": -19, 00:16:30.750 "message": "No such device" 00:16:30.750 } 00:16:30.750 01:44:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # es=1 00:16:30.750 01:44:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:16:30.750 01:44:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:16:30.750 01:44:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:16:30.750 01:44:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:31.007 aio_bdev 00:16:31.007 01:44:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e5b6c6b1-784d-47f1-8b9d-166e6983489f 00:16:31.007 01:44:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_name=e5b6c6b1-784d-47f1-8b9d-166e6983489f 00:16:31.007 01:44:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_timeout= 00:16:31.007 01:44:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local i 00:16:31.007 01:44:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # [[ -z '' ]] 00:16:31.007 01:44:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # bdev_timeout=2000 00:16:31.007 01:44:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:31.264 01:44:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e5b6c6b1-784d-47f1-8b9d-166e6983489f -t 2000 00:16:31.521 [ 00:16:31.521 { 00:16:31.521 "name": "e5b6c6b1-784d-47f1-8b9d-166e6983489f", 00:16:31.521 "aliases": [ 00:16:31.521 "lvs/lvol" 00:16:31.521 ], 00:16:31.521 "product_name": "Logical Volume", 00:16:31.521 "block_size": 4096, 00:16:31.521 "num_blocks": 38912, 00:16:31.521 "uuid": "e5b6c6b1-784d-47f1-8b9d-166e6983489f", 00:16:31.521 "assigned_rate_limits": { 00:16:31.521 "rw_ios_per_sec": 0, 00:16:31.521 "rw_mbytes_per_sec": 0, 00:16:31.521 "r_mbytes_per_sec": 0, 00:16:31.521 "w_mbytes_per_sec": 0 00:16:31.521 }, 00:16:31.521 "claimed": false, 00:16:31.521 "zoned": false, 00:16:31.521 "supported_io_types": { 00:16:31.521 "read": true, 00:16:31.521 "write": true, 00:16:31.521 "unmap": true, 00:16:31.521 "write_zeroes": true, 00:16:31.521 "flush": false, 00:16:31.521 "reset": true, 00:16:31.521 "compare": false, 00:16:31.521 "compare_and_write": false, 00:16:31.521 "abort": false, 00:16:31.521 "nvme_admin": false, 00:16:31.521 "nvme_io": false 00:16:31.521 }, 00:16:31.521 "driver_specific": { 00:16:31.521 "lvol": { 00:16:31.521 "lvol_store_uuid": "693dacc6-8f30-42ae-a176-7086938e69a5", 00:16:31.521 "base_bdev": "aio_bdev", 
00:16:31.521 "thin_provision": false, 00:16:31.521 "num_allocated_clusters": 38, 00:16:31.521 "snapshot": false, 00:16:31.521 "clone": false, 00:16:31.521 "esnap_clone": false 00:16:31.521 } 00:16:31.521 } 00:16:31.521 } 00:16:31.521 ] 00:16:31.521 01:44:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # return 0 00:16:31.521 01:44:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 693dacc6-8f30-42ae-a176-7086938e69a5 00:16:31.521 01:44:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:16:31.779 01:44:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:16:31.779 01:44:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 693dacc6-8f30-42ae-a176-7086938e69a5 00:16:31.779 01:44:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:16:32.037 01:44:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:16:32.037 01:44:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e5b6c6b1-784d-47f1-8b9d-166e6983489f 00:16:32.294 01:44:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 693dacc6-8f30-42ae-a176-7086938e69a5 00:16:32.552 01:44:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:32.811 01:44:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:32.811 00:16:32.811 real 0m17.498s 00:16:32.811 user 0m16.972s 00:16:32.811 sys 0m1.876s 00:16:32.811 01:44:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # xtrace_disable 00:16:32.811 01:44:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:32.811 ************************************ 00:16:32.811 END TEST lvs_grow_clean 00:16:32.811 ************************************ 00:16:32.811 01:44:56 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:16:32.811 01:44:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:16:32.811 01:44:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1104 -- # xtrace_disable 00:16:32.811 01:44:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:32.811 ************************************ 00:16:32.811 START TEST lvs_grow_dirty 00:16:32.811 ************************************ 00:16:32.811 01:44:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # lvs_grow dirty 00:16:32.811 01:44:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:32.811 01:44:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:32.811 01:44:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:16:32.811 01:44:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:32.811 01:44:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:32.811 01:44:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:32.811 01:44:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:32.811 01:44:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:32.811 01:44:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:33.069 01:44:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:33.069 01:44:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:33.326 01:44:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=14915ff0-5dfb-412b-aa6c-42fc4f384127 00:16:33.326 01:44:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14915ff0-5dfb-412b-aa6c-42fc4f384127 00:16:33.326 01:44:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:33.584 01:44:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:33.584 01:44:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:33.584 01:44:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 14915ff0-5dfb-412b-aa6c-42fc4f384127 lvol 150 00:16:33.842 01:44:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=94c3f84e-ebbc-4d51-ae8f-18fb8a707c99 00:16:33.842 01:44:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:33.842 01:44:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:34.099 [2024-05-15 01:44:58.010591] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:34.099 [2024-05-15 01:44:58.010663] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:34.099 true 00:16:34.358 01:44:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14915ff0-5dfb-412b-aa6c-42fc4f384127 00:16:34.358 01:44:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:16:34.358 01:44:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:34.358 01:44:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:34.923 01:44:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 94c3f84e-ebbc-4d51-ae8f-18fb8a707c99 00:16:34.923 01:44:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:35.180 [2024-05-15 01:44:59.081822] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:35.180 01:44:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:35.438 01:44:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4031613 00:16:35.438 01:44:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:35.438 01:44:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:35.438 01:44:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4031613 /var/tmp/bdevperf.sock 00:16:35.438 01:44:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@828 -- # '[' -z 4031613 ']' 00:16:35.438 01:44:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:35.438 01:44:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local max_retries=100 00:16:35.438 01:44:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:35.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:35.438 01:44:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # xtrace_disable 00:16:35.438 01:44:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:35.696 [2024-05-15 01:44:59.383402] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
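As in the clean pass, bdevperf runs as a separate process that idles (-z) until it is wired up over its own RPC socket. With the same $spdk/$rpc shorthands as the earlier sketch, the wiring is approximately:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk; rpc=$spdk/scripts/rpc.py
    $spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 \
        -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &                # 4 KiB I/O, QD 128, 10 s
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0   # exposes Nvme0n1
    $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests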
00:16:35.696 [2024-05-15 01:44:59.383491] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4031613 ] 00:16:35.696 EAL: No free 2048 kB hugepages reported on node 1 00:16:35.696 [2024-05-15 01:44:59.453903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.696 [2024-05-15 01:44:59.540790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.953 01:44:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:16:35.953 01:44:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@861 -- # return 0 00:16:35.953 01:44:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:36.210 Nvme0n1 00:16:36.210 01:45:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:36.468 [ 00:16:36.468 { 00:16:36.468 "name": "Nvme0n1", 00:16:36.468 "aliases": [ 00:16:36.468 "94c3f84e-ebbc-4d51-ae8f-18fb8a707c99" 00:16:36.468 ], 00:16:36.468 "product_name": "NVMe disk", 00:16:36.468 "block_size": 4096, 00:16:36.469 "num_blocks": 38912, 00:16:36.469 "uuid": "94c3f84e-ebbc-4d51-ae8f-18fb8a707c99", 00:16:36.469 "assigned_rate_limits": { 00:16:36.469 "rw_ios_per_sec": 0, 00:16:36.469 "rw_mbytes_per_sec": 0, 00:16:36.469 "r_mbytes_per_sec": 0, 00:16:36.469 "w_mbytes_per_sec": 0 00:16:36.469 }, 00:16:36.469 "claimed": false, 00:16:36.469 "zoned": false, 00:16:36.469 "supported_io_types": { 00:16:36.469 "read": true, 00:16:36.469 "write": true, 00:16:36.469 "unmap": true, 00:16:36.469 "write_zeroes": true, 00:16:36.469 "flush": true, 00:16:36.469 "reset": true, 00:16:36.469 "compare": true, 00:16:36.469 "compare_and_write": true, 00:16:36.469 "abort": true, 00:16:36.469 "nvme_admin": true, 00:16:36.469 "nvme_io": true 00:16:36.469 }, 00:16:36.469 "memory_domains": [ 00:16:36.469 { 00:16:36.469 "dma_device_id": "system", 00:16:36.469 "dma_device_type": 1 00:16:36.469 } 00:16:36.469 ], 00:16:36.469 "driver_specific": { 00:16:36.469 "nvme": [ 00:16:36.469 { 00:16:36.469 "trid": { 00:16:36.469 "trtype": "TCP", 00:16:36.469 "adrfam": "IPv4", 00:16:36.469 "traddr": "10.0.0.2", 00:16:36.469 "trsvcid": "4420", 00:16:36.469 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:36.469 }, 00:16:36.469 "ctrlr_data": { 00:16:36.469 "cntlid": 1, 00:16:36.469 "vendor_id": "0x8086", 00:16:36.469 "model_number": "SPDK bdev Controller", 00:16:36.469 "serial_number": "SPDK0", 00:16:36.469 "firmware_revision": "24.05", 00:16:36.469 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:36.469 "oacs": { 00:16:36.469 "security": 0, 00:16:36.469 "format": 0, 00:16:36.469 "firmware": 0, 00:16:36.469 "ns_manage": 0 00:16:36.469 }, 00:16:36.469 "multi_ctrlr": true, 00:16:36.469 "ana_reporting": false 00:16:36.469 }, 00:16:36.469 "vs": { 00:16:36.469 "nvme_version": "1.3" 00:16:36.469 }, 00:16:36.469 "ns_data": { 00:16:36.469 "id": 1, 00:16:36.469 "can_share": true 00:16:36.469 } 00:16:36.469 } 00:16:36.469 ], 00:16:36.469 "mp_policy": "active_passive" 00:16:36.469 } 00:16:36.469 } 00:16:36.469 ] 00:16:36.469 01:45:00 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4031777 00:16:36.469 01:45:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:36.469 01:45:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:36.727 Running I/O for 10 seconds... 00:16:37.677 Latency(us) 00:16:37.677 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:37.678 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:37.678 Nvme0n1 : 1.00 14670.00 57.30 0.00 0.00 0.00 0.00 0.00 00:16:37.678 =================================================================================================================== 00:16:37.678 Total : 14670.00 57.30 0.00 0.00 0.00 0.00 0.00 00:16:37.678 00:16:38.637 01:45:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 14915ff0-5dfb-412b-aa6c-42fc4f384127 00:16:38.637 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:38.637 Nvme0n1 : 2.00 15019.00 58.67 0.00 0.00 0.00 0.00 0.00 00:16:38.638 =================================================================================================================== 00:16:38.638 Total : 15019.00 58.67 0.00 0.00 0.00 0.00 0.00 00:16:38.638 00:16:38.895 true 00:16:38.895 01:45:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14915ff0-5dfb-412b-aa6c-42fc4f384127 00:16:38.895 01:45:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:39.153 01:45:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:39.153 01:45:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:39.153 01:45:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 4031777 00:16:39.721 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:39.721 Nvme0n1 : 3.00 14965.67 58.46 0.00 0.00 0.00 0.00 0.00 00:16:39.721 =================================================================================================================== 00:16:39.721 Total : 14965.67 58.46 0.00 0.00 0.00 0.00 0.00 00:16:39.721 00:16:40.657 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:40.657 Nvme0n1 : 4.00 15133.75 59.12 0.00 0.00 0.00 0.00 0.00 00:16:40.657 =================================================================================================================== 00:16:40.657 Total : 15133.75 59.12 0.00 0.00 0.00 0.00 0.00 00:16:40.657 00:16:41.593 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:41.593 Nvme0n1 : 5.00 15207.00 59.40 0.00 0.00 0.00 0.00 0.00 00:16:41.594 =================================================================================================================== 00:16:41.594 Total : 15207.00 59.40 0.00 0.00 0.00 0.00 0.00 00:16:41.594 00:16:42.529 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:42.529 Nvme0n1 : 6.00 15254.83 59.59 0.00 0.00 0.00 0.00 0.00 00:16:42.529 
=================================================================================================================== 00:16:42.529 Total : 15254.83 59.59 0.00 0.00 0.00 0.00 0.00 00:16:42.529 00:16:43.904 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:43.904 Nvme0n1 : 7.00 15343.43 59.94 0.00 0.00 0.00 0.00 0.00 00:16:43.904 =================================================================================================================== 00:16:43.904 Total : 15343.43 59.94 0.00 0.00 0.00 0.00 0.00 00:16:43.904 00:16:44.838 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:44.838 Nvme0n1 : 8.00 15306.75 59.79 0.00 0.00 0.00 0.00 0.00 00:16:44.838 =================================================================================================================== 00:16:44.838 Total : 15306.75 59.79 0.00 0.00 0.00 0.00 0.00 00:16:44.838 00:16:45.774 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:45.774 Nvme0n1 : 9.00 15280.22 59.69 0.00 0.00 0.00 0.00 0.00 00:16:45.774 =================================================================================================================== 00:16:45.774 Total : 15280.22 59.69 0.00 0.00 0.00 0.00 0.00 00:16:45.774 00:16:46.711 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:46.711 Nvme0n1 : 10.00 15308.80 59.80 0.00 0.00 0.00 0.00 0.00 00:16:46.711 =================================================================================================================== 00:16:46.711 Total : 15308.80 59.80 0.00 0.00 0.00 0.00 0.00 00:16:46.711 00:16:46.711 00:16:46.711 Latency(us) 00:16:46.711 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:46.711 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:46.711 Nvme0n1 : 10.01 15309.01 59.80 0.00 0.00 8355.61 3495.25 16214.09 00:16:46.711 =================================================================================================================== 00:16:46.711 Total : 15309.01 59.80 0.00 0.00 8355.61 3495.25 16214.09 00:16:46.711 0 00:16:46.711 01:45:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4031613 00:16:46.711 01:45:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@947 -- # '[' -z 4031613 ']' 00:16:46.711 01:45:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # kill -0 4031613 00:16:46.711 01:45:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # uname 00:16:46.711 01:45:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:16:46.711 01:45:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4031613 00:16:46.711 01:45:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:16:46.711 01:45:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:16:46.711 01:45:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4031613' 00:16:46.711 killing process with pid 4031613 00:16:46.711 01:45:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # kill 4031613 00:16:46.711 Received shutdown signal, test time was about 10.000000 seconds 00:16:46.711 00:16:46.711 Latency(us) 00:16:46.711 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:16:46.711 =================================================================================================================== 00:16:46.711 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:46.711 01:45:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # wait 4031613 00:16:46.970 01:45:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:47.228 01:45:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:47.486 01:45:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14915ff0-5dfb-412b-aa6c-42fc4f384127 00:16:47.486 01:45:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:16:47.745 01:45:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:16:47.745 01:45:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:16:47.745 01:45:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 4029050 00:16:47.745 01:45:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 4029050 00:16:47.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 4029050 Killed "${NVMF_APP[@]}" "$@" 00:16:47.745 01:45:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:16:47.745 01:45:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:16:47.745 01:45:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:47.745 01:45:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@721 -- # xtrace_disable 00:16:47.745 01:45:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:47.745 01:45:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=4033686 00:16:47.745 01:45:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:47.745 01:45:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 4033686 00:16:47.745 01:45:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@828 -- # '[' -z 4033686 ']' 00:16:47.745 01:45:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.745 01:45:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local max_retries=100 00:16:47.745 01:45:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
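What the dirty-recovery step above exercises: the lvstore was grown, then the target was killed with SIGKILL (pid 4029050) so the blobstore never gets a clean shutdown, and a fresh nvmf_tgt (pid 4033686) is started in the same network namespace. A bash sketch of that sequence follows; $rootdir and the waitforlisten helper are assumed from SPDK's autotest_common.sh conventions, not shown verbatim in this trace:

# Simulate a crash with unflushed lvstore metadata, then recover it.
kill -9 "$nvmfpid"                        # SIGKILL: blobstore gets no clean shutdown
ip netns exec cvl_0_0_ns_spdk \
    "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
waitforlisten "$nvmfpid"                  # block until /var/tmp/spdk.sock is up
# Re-registering the same backing file makes blobstore run crash recovery,
# which produces the "Performing recovery on blobstore" notices below.
"$rootdir/scripts/rpc.py" bdev_aio_create \
    "$rootdir/test/nvmf/target/aio_bdev" aio_bdev 4096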
00:16:47.745 01:45:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # xtrace_disable 00:16:47.745 01:45:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:47.745 [2024-05-15 01:45:11.618997] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:16:47.745 [2024-05-15 01:45:11.619082] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:47.745 EAL: No free 2048 kB hugepages reported on node 1 00:16:48.004 [2024-05-15 01:45:11.697605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.004 [2024-05-15 01:45:11.784393] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:48.004 [2024-05-15 01:45:11.784449] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:48.004 [2024-05-15 01:45:11.784462] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:48.004 [2024-05-15 01:45:11.784474] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:48.004 [2024-05-15 01:45:11.784494] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:48.004 [2024-05-15 01:45:11.784523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.004 01:45:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:16:48.004 01:45:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@861 -- # return 0 00:16:48.004 01:45:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:48.004 01:45:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@727 -- # xtrace_disable 00:16:48.004 01:45:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:48.004 01:45:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:48.004 01:45:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:48.262 [2024-05-15 01:45:12.141374] blobstore.c:4838:bs_recover: *NOTICE*: Performing recovery on blobstore 00:16:48.262 [2024-05-15 01:45:12.141504] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:16:48.262 [2024-05-15 01:45:12.141572] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:16:48.262 01:45:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:16:48.262 01:45:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 94c3f84e-ebbc-4d51-ae8f-18fb8a707c99 00:16:48.262 01:45:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_name=94c3f84e-ebbc-4d51-ae8f-18fb8a707c99 00:16:48.262 01:45:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_timeout= 00:16:48.262 01:45:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local i 00:16:48.262 01:45:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@899 -- # [[ -z '' ]] 00:16:48.262 01:45:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # bdev_timeout=2000 00:16:48.262 01:45:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:48.831 01:45:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 94c3f84e-ebbc-4d51-ae8f-18fb8a707c99 -t 2000 00:16:48.831 [ 00:16:48.831 { 00:16:48.831 "name": "94c3f84e-ebbc-4d51-ae8f-18fb8a707c99", 00:16:48.831 "aliases": [ 00:16:48.831 "lvs/lvol" 00:16:48.831 ], 00:16:48.831 "product_name": "Logical Volume", 00:16:48.831 "block_size": 4096, 00:16:48.831 "num_blocks": 38912, 00:16:48.831 "uuid": "94c3f84e-ebbc-4d51-ae8f-18fb8a707c99", 00:16:48.831 "assigned_rate_limits": { 00:16:48.831 "rw_ios_per_sec": 0, 00:16:48.831 "rw_mbytes_per_sec": 0, 00:16:48.831 "r_mbytes_per_sec": 0, 00:16:48.831 "w_mbytes_per_sec": 0 00:16:48.831 }, 00:16:48.831 "claimed": false, 00:16:48.831 "zoned": false, 00:16:48.831 "supported_io_types": { 00:16:48.831 "read": true, 00:16:48.831 "write": true, 00:16:48.831 "unmap": true, 00:16:48.831 "write_zeroes": true, 00:16:48.831 "flush": false, 00:16:48.831 "reset": true, 00:16:48.831 "compare": false, 00:16:48.831 "compare_and_write": false, 00:16:48.831 "abort": false, 00:16:48.831 "nvme_admin": false, 00:16:48.831 "nvme_io": false 00:16:48.831 }, 00:16:48.831 "driver_specific": { 00:16:48.831 "lvol": { 00:16:48.831 "lvol_store_uuid": "14915ff0-5dfb-412b-aa6c-42fc4f384127", 00:16:48.831 "base_bdev": "aio_bdev", 00:16:48.831 "thin_provision": false, 00:16:48.831 "num_allocated_clusters": 38, 00:16:48.831 "snapshot": false, 00:16:48.831 "clone": false, 00:16:48.831 "esnap_clone": false 00:16:48.831 } 00:16:48.831 } 00:16:48.831 } 00:16:48.831 ] 00:16:48.831 01:45:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # return 0 00:16:48.831 01:45:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14915ff0-5dfb-412b-aa6c-42fc4f384127 00:16:48.831 01:45:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:16:49.091 01:45:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:16:49.091 01:45:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14915ff0-5dfb-412b-aa6c-42fc4f384127 00:16:49.091 01:45:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:16:49.351 01:45:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:16:49.351 01:45:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:49.609 [2024-05-15 01:45:13.478859] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:49.609 01:45:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
14915ff0-5dfb-412b-aa6c-42fc4f384127 00:16:49.609 01:45:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@649 -- # local es=0 00:16:49.609 01:45:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14915ff0-5dfb-412b-aa6c-42fc4f384127 00:16:49.609 01:45:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:49.609 01:45:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:49.609 01:45:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:49.609 01:45:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:49.609 01:45:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:49.609 01:45:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:49.609 01:45:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:49.609 01:45:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:49.609 01:45:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14915ff0-5dfb-412b-aa6c-42fc4f384127 00:16:50.178 request: 00:16:50.178 { 00:16:50.178 "uuid": "14915ff0-5dfb-412b-aa6c-42fc4f384127", 00:16:50.178 "method": "bdev_lvol_get_lvstores", 00:16:50.178 "req_id": 1 00:16:50.178 } 00:16:50.178 Got JSON-RPC error response 00:16:50.178 response: 00:16:50.178 { 00:16:50.178 "code": -19, 00:16:50.178 "message": "No such device" 00:16:50.178 } 00:16:50.178 01:45:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # es=1 00:16:50.178 01:45:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:16:50.178 01:45:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:16:50.178 01:45:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:16:50.178 01:45:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:50.178 aio_bdev 00:16:50.178 01:45:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 94c3f84e-ebbc-4d51-ae8f-18fb8a707c99 00:16:50.437 01:45:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_name=94c3f84e-ebbc-4d51-ae8f-18fb8a707c99 00:16:50.437 01:45:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_timeout= 00:16:50.437 01:45:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local i 00:16:50.437 01:45:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # [[ -z '' ]] 
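The [[ -z '' ]] check just traced is the waitforbdev helper testing whether a timeout argument was passed; none was, so it falls back to the 2000 ms default on the next line, asks the target to finish bdev examination, and then looks the recovered logical volume up by UUID. A condensed sketch of that wait (the real helper lives in test/common/autotest_common.sh; the exact retry shape is an assumption):

# Condensed form of the wait performed here.
bdev_name=94c3f84e-ebbc-4d51-ae8f-18fb8a707c99
bdev_timeout=2000                                   # ms, the default seen in the trace
"$rootdir/scripts/rpc.py" bdev_wait_for_examine     # lvol examine must finish first
# bdev_get_bdevs -t waits up to the timeout for the bdev to appear:
"$rootdir/scripts/rpc.py" bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"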
00:16:50.437 01:45:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # bdev_timeout=2000 00:16:50.437 01:45:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:50.437 01:45:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 94c3f84e-ebbc-4d51-ae8f-18fb8a707c99 -t 2000 00:16:51.003 [ 00:16:51.003 { 00:16:51.003 "name": "94c3f84e-ebbc-4d51-ae8f-18fb8a707c99", 00:16:51.003 "aliases": [ 00:16:51.003 "lvs/lvol" 00:16:51.003 ], 00:16:51.003 "product_name": "Logical Volume", 00:16:51.003 "block_size": 4096, 00:16:51.003 "num_blocks": 38912, 00:16:51.003 "uuid": "94c3f84e-ebbc-4d51-ae8f-18fb8a707c99", 00:16:51.003 "assigned_rate_limits": { 00:16:51.003 "rw_ios_per_sec": 0, 00:16:51.003 "rw_mbytes_per_sec": 0, 00:16:51.003 "r_mbytes_per_sec": 0, 00:16:51.003 "w_mbytes_per_sec": 0 00:16:51.003 }, 00:16:51.003 "claimed": false, 00:16:51.003 "zoned": false, 00:16:51.003 "supported_io_types": { 00:16:51.003 "read": true, 00:16:51.003 "write": true, 00:16:51.003 "unmap": true, 00:16:51.003 "write_zeroes": true, 00:16:51.003 "flush": false, 00:16:51.003 "reset": true, 00:16:51.003 "compare": false, 00:16:51.003 "compare_and_write": false, 00:16:51.003 "abort": false, 00:16:51.003 "nvme_admin": false, 00:16:51.003 "nvme_io": false 00:16:51.003 }, 00:16:51.003 "driver_specific": { 00:16:51.003 "lvol": { 00:16:51.003 "lvol_store_uuid": "14915ff0-5dfb-412b-aa6c-42fc4f384127", 00:16:51.003 "base_bdev": "aio_bdev", 00:16:51.003 "thin_provision": false, 00:16:51.003 "num_allocated_clusters": 38, 00:16:51.003 "snapshot": false, 00:16:51.003 "clone": false, 00:16:51.003 "esnap_clone": false 00:16:51.003 } 00:16:51.003 } 00:16:51.003 } 00:16:51.003 ] 00:16:51.003 01:45:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # return 0 00:16:51.003 01:45:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14915ff0-5dfb-412b-aa6c-42fc4f384127 00:16:51.003 01:45:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:16:51.003 01:45:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:16:51.003 01:45:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14915ff0-5dfb-412b-aa6c-42fc4f384127 00:16:51.003 01:45:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:16:51.262 01:45:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:16:51.262 01:45:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 94c3f84e-ebbc-4d51-ae8f-18fb8a707c99 00:16:51.522 01:45:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 14915ff0-5dfb-412b-aa6c-42fc4f384127 00:16:51.782 01:45:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:52.041 01:45:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:52.041 00:16:52.041 real 0m19.192s 00:16:52.041 user 0m48.465s 00:16:52.041 sys 0m4.833s 00:16:52.041 01:45:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # xtrace_disable 00:16:52.041 01:45:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:52.041 ************************************ 00:16:52.041 END TEST lvs_grow_dirty 00:16:52.041 ************************************ 00:16:52.041 01:45:15 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:16:52.041 01:45:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # type=--id 00:16:52.041 01:45:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # id=0 00:16:52.041 01:45:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # '[' --id = --pid ']' 00:16:52.041 01:45:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:52.041 01:45:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # shm_files=nvmf_trace.0 00:16:52.041 01:45:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # [[ -z nvmf_trace.0 ]] 00:16:52.041 01:45:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # for n in $shm_files 00:16:52.042 01:45:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:52.042 nvmf_trace.0 00:16:52.042 01:45:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # return 0 00:16:52.042 01:45:15 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:16:52.042 01:45:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:52.042 01:45:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:16:52.042 01:45:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:52.042 01:45:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:16:52.042 01:45:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:52.042 01:45:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:52.042 rmmod nvme_tcp 00:16:52.302 rmmod nvme_fabrics 00:16:52.302 rmmod nvme_keyring 00:16:52.302 01:45:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:52.302 01:45:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:16:52.302 01:45:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:16:52.302 01:45:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 4033686 ']' 00:16:52.302 01:45:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 4033686 00:16:52.302 01:45:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@947 -- # '[' -z 4033686 ']' 00:16:52.302 01:45:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # kill -0 4033686 00:16:52.302 01:45:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # uname 00:16:52.302 01:45:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:16:52.302 01:45:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4033686 00:16:52.302 01:45:16 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:16:52.302 01:45:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:16:52.302 01:45:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4033686' 00:16:52.302 killing process with pid 4033686 00:16:52.302 01:45:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # kill 4033686 00:16:52.302 01:45:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # wait 4033686 00:16:52.561 01:45:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:52.561 01:45:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:52.561 01:45:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:52.561 01:45:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:52.561 01:45:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:52.561 01:45:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:52.561 01:45:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:52.561 01:45:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:54.468 01:45:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:54.468 00:16:54.468 real 0m42.421s 00:16:54.468 user 1m11.296s 00:16:54.468 sys 0m8.897s 00:16:54.468 01:45:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # xtrace_disable 00:16:54.468 01:45:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:54.468 ************************************ 00:16:54.468 END TEST nvmf_lvs_grow 00:16:54.468 ************************************ 00:16:54.468 01:45:18 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:16:54.468 01:45:18 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:16:54.468 01:45:18 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:16:54.468 01:45:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:54.468 ************************************ 00:16:54.468 START TEST nvmf_bdev_io_wait 00:16:54.468 ************************************ 00:16:54.468 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:16:54.727 * Looking for test storage... 
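The next test in the suite, bdev_io_wait.sh, starts here. It first sources test/nvmf/common.sh, which supplies the values traced below (NVMF_PORT=4420, the gen-hostnqn-derived NVME_HOSTNQN, NET_TYPE=phy, and so on) plus the nvmftestinit/nvmftestfini plumbing. The entry pattern, sketched with the usual SPDK testdir/rootdir derivation assumed (the xtrace begins after it):

#!/usr/bin/env bash
testdir=$(readlink -f "$(dirname "$0")")
rootdir=$(readlink -f "$testdir/../../..")       # the spdk checkout root
source "$rootdir/test/nvmf/common.sh"            # NVMF_PORT, NVME_HOSTNQN, helpers
TEST_TRANSPORT=${TEST_TRANSPORT:-tcp}            # normally set via --transport=tcp
nvmftestinit    # picks NICs by PCI ID and builds the cvl_0_0_ns_spdk namespace
# ... test body issues rpc.py calls against the target ...
nvmftestfini    # tears the namespace and target down on exit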
00:16:54.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:54.727 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:54.727 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:16:54.727 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:54.727 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:54.727 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:54.727 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:54.727 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:54.727 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:54.727 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:54.727 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:54.727 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:54.727 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:54.727 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:54.727 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:54.727 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:54.727 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:54.727 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:54.727 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:54.727 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:54.727 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:54.727 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:54.727 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:54.727 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.727 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.727 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.727 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:16:54.727 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.727 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:16:54.727 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:54.727 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:54.727 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:54.727 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:54.727 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:54.727 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:54.728 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:54.728 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:54.728 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:54.728 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:54.728 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:16:54.728 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:54.728 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:54.728 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:54.728 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:54.728 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:54.728 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:54.728 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:54.728 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:54.728 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:54.728 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:54.728 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:16:54.728 01:45:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:16:57.299 Found 0000:09:00.0 (0x8086 - 0x159b) 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:16:57.299 Found 0000:09:00.1 (0x8086 - 0x159b) 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:16:57.299 Found net devices under 0000:09:00.0: cvl_0_0 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:16:57.299 Found net devices under 0000:09:00.1: cvl_0_1 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:57.299 01:45:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:57.299 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:57.299 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:57.299 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:57.299 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:57.299 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:57.299 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:57.299 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:57.299 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:57.299 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:16:57.299 00:16:57.299 --- 10.0.0.2 ping statistics --- 00:16:57.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.299 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:16:57.299 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:57.299 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:57.299 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:16:57.299 00:16:57.299 --- 10.0.0.1 ping statistics --- 00:16:57.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.299 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:16:57.299 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:57.299 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:16:57.299 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:57.299 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:57.299 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:57.299 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:57.299 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:57.299 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:57.299 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:57.299 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:16:57.299 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:57.299 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@721 -- # xtrace_disable 00:16:57.299 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:57.299 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=4036507 00:16:57.299 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:16:57.299 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 4036507 00:16:57.299 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@828 -- # '[' -z 4036507 ']' 00:16:57.299 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.299 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local max_retries=100 00:16:57.299 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.299 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # xtrace_disable 00:16:57.299 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:57.299 [2024-05-15 01:45:21.163628] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
00:16:57.299 [2024-05-15 01:45:21.163726] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:57.299 EAL: No free 2048 kB hugepages reported on node 1 00:16:57.557 [2024-05-15 01:45:21.245261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:57.557 [2024-05-15 01:45:21.334122] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:57.557 [2024-05-15 01:45:21.334184] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:57.557 [2024-05-15 01:45:21.334208] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:57.557 [2024-05-15 01:45:21.334233] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:57.557 [2024-05-15 01:45:21.334246] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:57.557 [2024-05-15 01:45:21.334305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:57.557 [2024-05-15 01:45:21.334358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:57.557 [2024-05-15 01:45:21.334477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:57.557 [2024-05-15 01:45:21.334480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.557 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:16:57.557 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@861 -- # return 0 00:16:57.557 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:57.557 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@727 -- # xtrace_disable 00:16:57.557 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:57.557 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:57.557 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:16:57.557 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.557 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:57.557 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.557 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:16:57.557 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.557 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:57.557 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.557 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:57.557 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.557 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:57.557 [2024-05-15 01:45:21.461932] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:57.557 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.557 01:45:21 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:57.557 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.557 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:57.816 Malloc0 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:57.817 [2024-05-15 01:45:21.523383] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:57.817 [2024-05-15 01:45:21.523697] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=4036535 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=4036537 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:57.817 { 00:16:57.817 "params": { 00:16:57.817 "name": "Nvme$subsystem", 00:16:57.817 "trtype": "$TEST_TRANSPORT", 00:16:57.817 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:57.817 "adrfam": "ipv4", 00:16:57.817 "trsvcid": "$NVMF_PORT", 00:16:57.817 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:57.817 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:57.817 "hdgst": ${hdgst:-false}, 00:16:57.817 "ddgst": ${ddgst:-false} 00:16:57.817 }, 00:16:57.817 "method": 
"bdev_nvme_attach_controller" 00:16:57.817 } 00:16:57.817 EOF 00:16:57.817 )") 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=4036539 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:57.817 { 00:16:57.817 "params": { 00:16:57.817 "name": "Nvme$subsystem", 00:16:57.817 "trtype": "$TEST_TRANSPORT", 00:16:57.817 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:57.817 "adrfam": "ipv4", 00:16:57.817 "trsvcid": "$NVMF_PORT", 00:16:57.817 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:57.817 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:57.817 "hdgst": ${hdgst:-false}, 00:16:57.817 "ddgst": ${ddgst:-false} 00:16:57.817 }, 00:16:57.817 "method": "bdev_nvme_attach_controller" 00:16:57.817 } 00:16:57.817 EOF 00:16:57.817 )") 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=4036542 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:57.817 { 00:16:57.817 "params": { 00:16:57.817 "name": "Nvme$subsystem", 00:16:57.817 "trtype": "$TEST_TRANSPORT", 00:16:57.817 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:57.817 "adrfam": "ipv4", 00:16:57.817 "trsvcid": "$NVMF_PORT", 00:16:57.817 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:57.817 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:57.817 "hdgst": ${hdgst:-false}, 00:16:57.817 "ddgst": ${ddgst:-false} 00:16:57.817 }, 00:16:57.817 "method": "bdev_nvme_attach_controller" 00:16:57.817 } 00:16:57.817 EOF 00:16:57.817 )") 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@532 -- # local subsystem config 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:57.817 { 00:16:57.817 "params": { 00:16:57.817 "name": "Nvme$subsystem", 00:16:57.817 "trtype": "$TEST_TRANSPORT", 00:16:57.817 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:57.817 "adrfam": "ipv4", 00:16:57.817 "trsvcid": "$NVMF_PORT", 00:16:57.817 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:57.817 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:57.817 "hdgst": ${hdgst:-false}, 00:16:57.817 "ddgst": ${ddgst:-false} 00:16:57.817 }, 00:16:57.817 "method": "bdev_nvme_attach_controller" 00:16:57.817 } 00:16:57.817 EOF 00:16:57.817 )") 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 4036535 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:57.817 "params": { 00:16:57.817 "name": "Nvme1", 00:16:57.817 "trtype": "tcp", 00:16:57.817 "traddr": "10.0.0.2", 00:16:57.817 "adrfam": "ipv4", 00:16:57.817 "trsvcid": "4420", 00:16:57.817 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:57.817 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:57.817 "hdgst": false, 00:16:57.817 "ddgst": false 00:16:57.817 }, 00:16:57.817 "method": "bdev_nvme_attach_controller" 00:16:57.817 }' 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
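The --json /dev/fd/63 argument on each bdevperf command line above is bash process substitution: gen_nvmf_target_json emits a bdev_nvme_attach_controller config through a heredoc, jq normalizes it, and bdevperf reads the result as its JSON config file. A minimal sketch of the same pattern follows, with the params block copied from the printf output in this trace; the surrounding "subsystems" wrapper is assumed (it is the standard SPDK application JSON config layout, but the wrapper itself is not echoed here), and gen_json is a hypothetical stand-in for gen_nvmf_target_json:

gen_json() {
    # emit a bdev-subsystem config that attaches one NVMe-oF controller
    jq . <<EOF
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
EOF
}

# bdevperf sees the substituted /dev/fd path as an ordinary config file
build/examples/bdevperf --json <(gen_json) -q 128 -o 4096 -w write -t 1 -s 256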
00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:57.817 "params": { 00:16:57.817 "name": "Nvme1", 00:16:57.817 "trtype": "tcp", 00:16:57.817 "traddr": "10.0.0.2", 00:16:57.817 "adrfam": "ipv4", 00:16:57.817 "trsvcid": "4420", 00:16:57.817 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:57.817 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:57.817 "hdgst": false, 00:16:57.817 "ddgst": false 00:16:57.817 }, 00:16:57.817 "method": "bdev_nvme_attach_controller" 00:16:57.817 }' 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:57.817 "params": { 00:16:57.817 "name": "Nvme1", 00:16:57.817 "trtype": "tcp", 00:16:57.817 "traddr": "10.0.0.2", 00:16:57.817 "adrfam": "ipv4", 00:16:57.817 "trsvcid": "4420", 00:16:57.817 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:57.817 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:57.817 "hdgst": false, 00:16:57.817 "ddgst": false 00:16:57.817 }, 00:16:57.817 "method": "bdev_nvme_attach_controller" 00:16:57.817 }' 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:57.817 01:45:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:57.817 "params": { 00:16:57.817 "name": "Nvme1", 00:16:57.817 "trtype": "tcp", 00:16:57.817 "traddr": "10.0.0.2", 00:16:57.817 "adrfam": "ipv4", 00:16:57.817 "trsvcid": "4420", 00:16:57.817 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:57.817 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:57.817 "hdgst": false, 00:16:57.817 "ddgst": false 00:16:57.817 }, 00:16:57.817 "method": "bdev_nvme_attach_controller" 00:16:57.817 }' 00:16:57.817 [2024-05-15 01:45:21.567236] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:16:57.817 [2024-05-15 01:45:21.567243] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:16:57.817 [2024-05-15 01:45:21.567316] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:16:57.817 [2024-05-15 01:45:21.567321] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:16:57.817 [2024-05-15 01:45:21.567549] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:16:57.817 [2024-05-15 01:45:21.567607] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:16:57.817 [2024-05-15 01:45:21.568948] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
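Four bdevperf instances now initialize in parallel against the same cnode1 subsystem, one per workload, each pinned to its own core mask (-m 0x10/0x20/0x40/0x80) with a distinct shm id (-i 1..4) and DPDK --file-prefix so the EAL instances do not collide. Stripped of the test plumbing, the launch-and-collect pattern reduces to the sketch below (run_one is a hypothetical helper; gen_json as sketched earlier):

run_one() {  # args: core mask, shm id, workload
    build/examples/bdevperf -m "$1" -i "$2" --json <(gen_json) \
        -q 128 -o 4096 -w "$3" -t 1 -s 256 &
}
run_one 0x10 1 write; WRITE_PID=$!
run_one 0x20 2 read;  READ_PID=$!
run_one 0x40 3 flush; FLUSH_PID=$!
run_one 0x80 4 unmap; UNMAP_PID=$!
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"

In the results that follow, expect flush to post far higher IOPS than read/write/unmap: against a RAM-backed Malloc bdev a flush carries no data payload, so it presumably completes as soon as the target acknowledges it.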
00:16:57.817 [2024-05-15 01:45:21.569026] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:16:57.817 EAL: No free 2048 kB hugepages reported on node 1 00:16:57.817 EAL: No free 2048 kB hugepages reported on node 1 00:16:58.078 [2024-05-15 01:45:21.757341] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.078 EAL: No free 2048 kB hugepages reported on node 1 00:16:58.078 [2024-05-15 01:45:21.833141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:58.078 [2024-05-15 01:45:21.857721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.078 EAL: No free 2048 kB hugepages reported on node 1 00:16:58.078 [2024-05-15 01:45:21.932316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:58.078 [2024-05-15 01:45:21.956829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.337 [2024-05-15 01:45:22.034203] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.337 [2024-05-15 01:45:22.037163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:16:58.337 [2024-05-15 01:45:22.103202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:58.337 Running I/O for 1 seconds... 00:16:58.597 Running I/O for 1 seconds... 00:16:58.597 Running I/O for 1 seconds... 00:16:58.597 Running I/O for 1 seconds... 00:16:59.532 00:16:59.532 Latency(us) 00:16:59.532 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:59.532 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:16:59.532 Nvme1n1 : 1.01 10134.39 39.59 0.00 0.00 12575.19 8009.96 20680.25 00:16:59.532 =================================================================================================================== 00:16:59.532 Total : 10134.39 39.59 0.00 0.00 12575.19 8009.96 20680.25 00:16:59.532 00:16:59.532 Latency(us) 00:16:59.532 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:59.532 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:16:59.532 Nvme1n1 : 1.01 8812.02 34.42 0.00 0.00 14464.31 7330.32 25437.68 00:16:59.532 =================================================================================================================== 00:16:59.532 Total : 8812.02 34.42 0.00 0.00 14464.31 7330.32 25437.68 00:16:59.532 00:16:59.532 Latency(us) 00:16:59.532 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:59.532 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:16:59.532 Nvme1n1 : 1.00 166828.27 651.67 0.00 0.00 764.32 318.58 1074.06 00:16:59.532 =================================================================================================================== 00:16:59.532 Total : 166828.27 651.67 0.00 0.00 764.32 318.58 1074.06 00:16:59.532 00:16:59.532 Latency(us) 00:16:59.532 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:59.532 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:16:59.532 Nvme1n1 : 1.01 8813.63 34.43 0.00 0.00 14460.11 7378.87 26796.94 00:16:59.532 =================================================================================================================== 00:16:59.532 Total : 8813.63 34.43 0.00 0.00 14460.11 7378.87 26796.94 00:16:59.791 01:45:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # 
wait 4036537 00:16:59.791 01:45:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 4036539 00:16:59.791 01:45:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 4036542 00:16:59.791 01:45:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:59.791 01:45:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:59.791 01:45:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:59.791 01:45:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:59.791 01:45:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:16:59.791 01:45:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:16:59.791 01:45:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:59.791 01:45:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:16:59.791 01:45:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:59.791 01:45:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:16:59.791 01:45:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:59.791 01:45:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:59.791 rmmod nvme_tcp 00:16:59.791 rmmod nvme_fabrics 00:17:00.051 rmmod nvme_keyring 00:17:00.051 01:45:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:00.051 01:45:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:17:00.051 01:45:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:17:00.051 01:45:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 4036507 ']' 00:17:00.051 01:45:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 4036507 00:17:00.051 01:45:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@947 -- # '[' -z 4036507 ']' 00:17:00.051 01:45:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # kill -0 4036507 00:17:00.051 01:45:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # uname 00:17:00.051 01:45:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:17:00.051 01:45:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4036507 00:17:00.051 01:45:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:17:00.051 01:45:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:17:00.051 01:45:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4036507' 00:17:00.051 killing process with pid 4036507 00:17:00.051 01:45:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # kill 4036507 00:17:00.051 [2024-05-15 01:45:23.771736] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:00.051 01:45:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # wait 4036507 00:17:00.311 01:45:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:00.311 01:45:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:00.311 01:45:23 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:00.311 01:45:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:00.311 01:45:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:00.311 01:45:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.311 01:45:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:00.311 01:45:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.217 01:45:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:02.217 00:17:02.217 real 0m7.658s 00:17:02.217 user 0m16.121s 00:17:02.217 sys 0m4.134s 00:17:02.217 01:45:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # xtrace_disable 00:17:02.217 01:45:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:02.217 ************************************ 00:17:02.217 END TEST nvmf_bdev_io_wait 00:17:02.217 ************************************ 00:17:02.217 01:45:26 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:02.217 01:45:26 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:17:02.217 01:45:26 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:17:02.217 01:45:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:02.217 ************************************ 00:17:02.217 START TEST nvmf_queue_depth 00:17:02.217 ************************************ 00:17:02.217 01:45:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:02.217 * Looking for test storage... 
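The queue-depth suite re-sources nvmf/common.sh from scratch: regenerate the host identity, re-detect NICs, rebuild the namespace topology. The host identity step traced below pairs an NQN from nvme-cli with its UUID suffix; a sketch of the derivation (common.sh's exact parameterization is not visible in this trace, so the suffix-stripping expansion here is an assumption):

NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # bare <uuid>, as seen in the trace below
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")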
00:17:02.217 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:02.217 01:45:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:02.217 01:45:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:17:02.217 01:45:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:02.217 01:45:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:02.217 01:45:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:02.217 01:45:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:02.217 01:45:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:02.217 01:45:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:02.218 01:45:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:02.218 01:45:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:02.218 01:45:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:02.218 01:45:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:02.218 01:45:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:02.218 01:45:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:02.218 01:45:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:02.218 01:45:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:02.218 01:45:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:02.218 01:45:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:02.218 01:45:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:02.218 01:45:26 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:02.218 01:45:26 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:02.218 01:45:26 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:02.218 01:45:26 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.218 01:45:26 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.218 01:45:26 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.218 01:45:26 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:17:02.218 01:45:26 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.218 01:45:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:17:02.218 01:45:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:02.218 01:45:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:02.218 01:45:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:02.218 01:45:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:02.218 01:45:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:02.218 01:45:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:02.218 01:45:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:02.218 01:45:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:02.218 01:45:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:02.218 01:45:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:02.218 01:45:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:02.218 01:45:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:02.218 01:45:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:02.218 01:45:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:02.218 01:45:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:02.218 01:45:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:02.218 01:45:26 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:17:02.218 01:45:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:02.218 01:45:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:02.218 01:45:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.476 01:45:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:02.476 01:45:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:02.476 01:45:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:17:02.476 01:45:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:05.016 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:05.016 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:17:05.016 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:05.016 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:05.016 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:05.016 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:05.016 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:05.016 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:17:05.016 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:05.016 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:17:05.016 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:17:05.016 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:17:05.016 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:17:05.016 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:17:05.016 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:17:05.016 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:05.016 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:05.016 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:05.016 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:05.016 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:05.016 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:05.016 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:05.016 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:05.016 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:05.016 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:05.016 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:05.016 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:05.016 
01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:05.016 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:05.016 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:05.016 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:05.016 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:05.016 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:05.016 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:05.016 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:05.016 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:05.016 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:17:05.017 Found 0000:09:00.1 (0x8086 - 0x159b) 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:05.017 Found net devices under 0000:09:00.0: cvl_0_0 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:05.017 Found net devices under 0000:09:00.1: cvl_0_1 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:05.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:05.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:17:05.017 00:17:05.017 --- 10.0.0.2 ping statistics --- 00:17:05.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.017 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:05.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:05.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:17:05.017 00:17:05.017 --- 10.0.0.1 ping statistics --- 00:17:05.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.017 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@721 -- # xtrace_disable 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=4039159 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 4039159 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@828 -- # '[' -z 4039159 ']' 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local max_retries=100 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # xtrace_disable 00:17:05.017 01:45:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:05.017 [2024-05-15 01:45:28.742716] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
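Connectivity is proven with one ping in each direction before the target comes up. Per the commands traced above, the first port (cvl_0_0) moves into a private network namespace and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1; the target application itself then runs inside that namespace via ip netns exec. Condensed from this trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator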
00:17:05.017 [2024-05-15 01:45:28.742808] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:05.017 EAL: No free 2048 kB hugepages reported on node 1 00:17:05.017 [2024-05-15 01:45:28.826027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.017 [2024-05-15 01:45:28.913919] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:05.017 [2024-05-15 01:45:28.913966] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:05.017 [2024-05-15 01:45:28.913993] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:05.017 [2024-05-15 01:45:28.914007] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:05.017 [2024-05-15 01:45:28.914019] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:05.017 [2024-05-15 01:45:28.914056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:05.276 01:45:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:17:05.276 01:45:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@861 -- # return 0 00:17:05.276 01:45:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:05.276 01:45:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@727 -- # xtrace_disable 00:17:05.276 01:45:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:05.276 01:45:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:05.276 01:45:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:05.276 01:45:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:05.276 01:45:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:05.276 [2024-05-15 01:45:29.059578] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:05.276 01:45:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:05.276 01:45:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:05.276 01:45:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:05.276 01:45:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:05.276 Malloc0 00:17:05.276 01:45:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:05.276 01:45:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:05.276 01:45:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:05.276 01:45:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:05.276 01:45:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:05.276 01:45:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:05.276 01:45:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:05.276 01:45:29 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:05.276 01:45:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:05.276 01:45:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:05.276 01:45:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:05.276 01:45:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:05.276 [2024-05-15 01:45:29.119787] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:05.276 [2024-05-15 01:45:29.120129] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:05.276 01:45:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:05.276 01:45:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=4039192 00:17:05.276 01:45:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:05.276 01:45:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:05.276 01:45:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 4039192 /var/tmp/bdevperf.sock 00:17:05.276 01:45:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@828 -- # '[' -z 4039192 ']' 00:17:05.276 01:45:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:05.276 01:45:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local max_retries=100 00:17:05.276 01:45:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:05.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:05.276 01:45:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # xtrace_disable 00:17:05.276 01:45:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:05.276 [2024-05-15 01:45:29.164981] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
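This bdevperf differs from the bdev_io_wait jobs: -z starts it idle, waiting on its own RPC socket (-r /var/tmp/bdevperf.sock), so the test can attach the NVMe-oF controller and trigger the run externally at queue depth 1024. The driving sequence, condensed from the trace that follows (paths relative to the SPDK tree):

build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests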
00:17:05.276 [2024-05-15 01:45:29.165065] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4039192 ] 00:17:05.276 EAL: No free 2048 kB hugepages reported on node 1 00:17:05.535 [2024-05-15 01:45:29.239427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.535 [2024-05-15 01:45:29.327620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.535 01:45:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:17:05.535 01:45:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@861 -- # return 0 00:17:05.535 01:45:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:05.535 01:45:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:05.535 01:45:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:05.795 NVMe0n1 00:17:05.796 01:45:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:05.796 01:45:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:05.796 Running I/O for 10 seconds... 00:17:15.866 00:17:15.866 Latency(us) 00:17:15.866 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:15.866 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:17:15.866 Verification LBA range: start 0x0 length 0x4000 00:17:15.866 NVMe0n1 : 10.09 8474.39 33.10 0.00 0.00 120212.32 25243.50 72623.60 00:17:15.866 =================================================================================================================== 00:17:15.866 Total : 8474.39 33.10 0.00 0.00 120212.32 25243.50 72623.60 00:17:15.866 0 00:17:15.866 01:45:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 4039192 00:17:15.866 01:45:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@947 -- # '[' -z 4039192 ']' 00:17:15.866 01:45:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # kill -0 4039192 00:17:15.866 01:45:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # uname 00:17:15.866 01:45:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:17:15.866 01:45:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4039192 00:17:15.866 01:45:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:17:15.866 01:45:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:17:15.866 01:45:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4039192' 00:17:15.866 killing process with pid 4039192 00:17:15.866 01:45:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # kill 4039192 00:17:15.866 Received shutdown signal, test time was about 10.000000 seconds 00:17:15.866 00:17:15.866 Latency(us) 00:17:15.866 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:15.866 =================================================================================================================== 00:17:15.866 Total 
: 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:15.866 01:45:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@971 -- # wait 4039192 00:17:16.126 01:45:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:16.126 01:45:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:17:16.126 01:45:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:16.126 01:45:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:17:16.126 01:45:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:16.126 01:45:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:17:16.126 01:45:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:16.126 01:45:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:16.126 rmmod nvme_tcp 00:17:16.126 rmmod nvme_fabrics 00:17:16.126 rmmod nvme_keyring 00:17:16.386 01:45:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:16.386 01:45:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:17:16.386 01:45:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:17:16.386 01:45:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 4039159 ']' 00:17:16.386 01:45:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 4039159 00:17:16.386 01:45:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@947 -- # '[' -z 4039159 ']' 00:17:16.386 01:45:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # kill -0 4039159 00:17:16.386 01:45:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # uname 00:17:16.386 01:45:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:17:16.386 01:45:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4039159 00:17:16.386 01:45:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:17:16.386 01:45:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:17:16.386 01:45:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4039159' 00:17:16.386 killing process with pid 4039159 00:17:16.386 01:45:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # kill 4039159 00:17:16.386 [2024-05-15 01:45:40.099506] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:16.386 01:45:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@971 -- # wait 4039159 00:17:16.646 01:45:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:16.646 01:45:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:16.646 01:45:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:16.646 01:45:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:16.646 01:45:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:16.646 01:45:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.646 01:45:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:16.646 01:45:40 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:18.552 01:45:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:18.552 00:17:18.552 real 0m16.330s 00:17:18.552 user 0m22.512s 00:17:18.552 sys 0m3.276s 00:17:18.552 01:45:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # xtrace_disable 00:17:18.552 01:45:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:18.552 ************************************ 00:17:18.552 END TEST nvmf_queue_depth 00:17:18.552 ************************************ 00:17:18.552 01:45:42 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:18.552 01:45:42 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:17:18.552 01:45:42 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:17:18.552 01:45:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:18.552 ************************************ 00:17:18.552 START TEST nvmf_target_multipath 00:17:18.552 ************************************ 00:17:18.552 01:45:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:18.811 * Looking for test storage... 00:17:18.811 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:17:18.811 01:45:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:21.383 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:21.383 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:17:21.383 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:21.383 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:21.383 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:21.383 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:21.383 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:21.383 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:17:21.383 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:21.383 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:17:21.383 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:17:21.383 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:17:21.383 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:17:21.383 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:17:21.383 01:45:44 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:17:21.383 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:21.383 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:21.383 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:21.383 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:21.383 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:21.383 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:21.383 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:21.383 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:21.383 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:21.383 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:21.383 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:21.383 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:21.383 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:21.383 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:21.383 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:21.383 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:21.383 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:21.383 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:21.383 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:21.383 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:21.383 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:21.383 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:21.383 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:21.383 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:21.383 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:21.384 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:21.384 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:17:21.384 Found 0000:09:00.1 (0x8086 - 0x159b) 00:17:21.384 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:21.384 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:21.384 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:21.384 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:21.384 01:45:44 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:21.384 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:21.384 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:21.384 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:21.384 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:21.384 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:21.384 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:21.384 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:21.384 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:21.384 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:21.384 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:21.384 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:21.384 Found net devices under 0000:09:00.0: cvl_0_0 00:17:21.384 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:21.384 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:21.384 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:21.384 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:21.384 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:21.384 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:21.384 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:21.384 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:21.384 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:21.384 Found net devices under 0000:09:00.1: cvl_0_1 00:17:21.384 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:21.384 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:21.384 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:17:21.384 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:21.384 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:21.384 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:21.384 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:21.384 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:21.384 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:21.384 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:21.384 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:21.384 01:45:44 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:21.384 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:21.384 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:21.384 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:21.384 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:21.384 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:21.384 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:21.384 01:45:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:21.384 01:45:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:21.384 01:45:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:21.384 01:45:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:21.384 01:45:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:21.384 01:45:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:21.384 01:45:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:21.384 01:45:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:21.384 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:21.384 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:17:21.384 00:17:21.384 --- 10.0.0.2 ping statistics --- 00:17:21.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.384 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:17:21.384 01:45:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:21.384 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:21.384 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:17:21.384 00:17:21.384 --- 10.0.0.1 ping statistics --- 00:17:21.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.384 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:17:21.384 01:45:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:21.384 01:45:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:17:21.384 01:45:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:21.384 01:45:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:21.384 01:45:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:21.384 01:45:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:21.384 01:45:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:21.384 01:45:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:21.384 01:45:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:21.384 01:45:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:17:21.384 01:45:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:17:21.384 only one NIC for nvmf test 00:17:21.384 01:45:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:17:21.384 01:45:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:21.384 01:45:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:21.384 01:45:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:21.384 01:45:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:21.384 01:45:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:21.384 01:45:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:21.384 rmmod nvme_tcp 00:17:21.384 rmmod nvme_fabrics 00:17:21.384 rmmod nvme_keyring 00:17:21.384 01:45:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:21.384 01:45:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:21.384 01:45:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:21.384 01:45:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:21.384 01:45:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:21.384 01:45:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:21.384 01:45:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:21.384 01:45:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:21.384 01:45:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:21.384 01:45:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.384 01:45:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:21.384 01:45:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:23.286 01:45:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:17:23.286 01:45:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:17:23.286 01:45:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:17:23.286 01:45:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:23.286 01:45:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:23.286 01:45:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:23.286 01:45:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:23.286 01:45:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:23.286 01:45:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:23.286 01:45:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:23.286 01:45:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:23.286 01:45:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:23.286 01:45:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:23.286 01:45:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:23.286 01:45:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:23.286 01:45:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:23.286 01:45:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:23.286 01:45:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:23.286 01:45:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.286 01:45:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:23.286 01:45:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:23.286 01:45:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:23.286 00:17:23.286 real 0m4.736s 00:17:23.286 user 0m0.970s 00:17:23.286 sys 0m1.783s 00:17:23.286 01:45:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # xtrace_disable 00:17:23.286 01:45:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:23.286 ************************************ 00:17:23.286 END TEST nvmf_target_multipath 00:17:23.286 ************************************ 00:17:23.544 01:45:47 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:23.544 01:45:47 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:17:23.544 01:45:47 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:17:23.544 01:45:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:23.544 ************************************ 00:17:23.544 START TEST nvmf_zcopy 00:17:23.544 ************************************ 00:17:23.544 01:45:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:23.544 * Looking for test storage... 
00:17:23.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:17:23.545 01:45:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:26.076 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:26.076 
01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:17:26.076 Found 0000:09:00.1 (0x8086 - 0x159b) 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:26.076 Found net devices under 0000:09:00.0: cvl_0_0 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:26.076 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:26.077 Found net devices under 0000:09:00.1: cvl_0_1 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:26.077 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:26.077 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:17:26.077 00:17:26.077 --- 10.0.0.2 ping statistics --- 00:17:26.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.077 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:26.077 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:26.077 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:17:26.077 00:17:26.077 --- 10.0.0.1 ping statistics --- 00:17:26.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.077 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@721 -- # xtrace_disable 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=4044944 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 4044944 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@828 -- # '[' -z 4044944 ']' 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local max_retries=100 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:26.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@837 -- # xtrace_disable 00:17:26.077 01:45:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:26.335 [2024-05-15 01:45:50.012406] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:17:26.335 [2024-05-15 01:45:50.012514] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:26.335 EAL: No free 2048 kB hugepages reported on node 1 00:17:26.335 [2024-05-15 01:45:50.101336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.335 [2024-05-15 01:45:50.192158] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:26.335 [2024-05-15 01:45:50.192236] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:26.335 [2024-05-15 01:45:50.192255] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:26.335 [2024-05-15 01:45:50.192280] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:26.335 [2024-05-15 01:45:50.192292] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:26.335 [2024-05-15 01:45:50.192323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:26.594 01:45:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:17:26.594 01:45:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@861 -- # return 0 00:17:26.594 01:45:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:26.594 01:45:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@727 -- # xtrace_disable 00:17:26.594 01:45:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:26.594 01:45:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:26.594 01:45:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:17:26.594 01:45:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:17:26.594 01:45:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:26.594 01:45:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:26.594 [2024-05-15 01:45:50.341678] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:26.594 01:45:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:26.594 01:45:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:26.594 01:45:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:26.594 01:45:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:26.594 01:45:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:26.594 01:45:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:26.594 01:45:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:26.594 01:45:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:26.594 [2024-05-15 01:45:50.357634] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:26.594 [2024-05-15 01:45:50.357941] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:26.594 01:45:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:26.594 01:45:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:26.594 01:45:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:26.595 01:45:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:26.595 01:45:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:26.595 01:45:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:17:26.595 01:45:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:17:26.595 01:45:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:26.595 malloc0 00:17:26.595 01:45:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:26.595 01:45:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:26.595 01:45:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:26.595 01:45:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:26.595 01:45:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:26.595 01:45:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:17:26.595 01:45:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:17:26.595 01:45:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:17:26.595 01:45:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:17:26.595 01:45:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:26.595 01:45:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:26.595 { 00:17:26.595 "params": { 00:17:26.595 "name": "Nvme$subsystem", 00:17:26.595 "trtype": "$TEST_TRANSPORT", 00:17:26.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:26.595 "adrfam": "ipv4", 00:17:26.595 "trsvcid": "$NVMF_PORT", 00:17:26.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:26.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:26.595 "hdgst": ${hdgst:-false}, 00:17:26.595 "ddgst": ${ddgst:-false} 00:17:26.595 }, 00:17:26.595 "method": "bdev_nvme_attach_controller" 00:17:26.595 } 00:17:26.595 EOF 00:17:26.595 )") 00:17:26.595 01:45:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:17:26.595 01:45:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:17:26.595 01:45:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:17:26.595 01:45:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:26.595 "params": { 00:17:26.595 "name": "Nvme1", 00:17:26.595 "trtype": "tcp", 00:17:26.595 "traddr": "10.0.0.2", 00:17:26.595 "adrfam": "ipv4", 00:17:26.595 "trsvcid": "4420", 00:17:26.595 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:26.595 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:26.595 "hdgst": false, 00:17:26.595 "ddgst": false 00:17:26.595 }, 00:17:26.595 "method": "bdev_nvme_attach_controller" 00:17:26.595 }' 00:17:26.595 [2024-05-15 01:45:50.438946] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:17:26.595 [2024-05-15 01:45:50.439031] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4045087 ] 00:17:26.595 EAL: No free 2048 kB hugepages reported on node 1 00:17:26.595 [2024-05-15 01:45:50.517125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.854 [2024-05-15 01:45:50.608869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.112 Running I/O for 10 seconds... 
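[Annotation] Before the verify results print below, it helps to untangle what the trace above actually set up: nvmf_tgt was started inside the cvl_0_0_ns_spdk network namespace (which holds cvl_0_0 at 10.0.0.2, while cvl_0_1 at 10.0.0.1 stays on the host side as the initiator), and the test then configured it over the RPC socket. Condensed into plain commands, with the Jenkins workspace prefix shortened to the SPDK repo root, the sequence amounts to the sketch below. Every flag is copied from the rpc_cmd trace ('-c 0 --zcopy' turns zero-copy on for the TCP transport), and the /dev/fd/62 path handed to bdevperf is most likely bash process substitution over the gen_nvmf_target_json output printed above; this is a sketch of the shape, not a copy of zcopy.sh:

    # condensed from the rpc_cmd/xtrace output above; paths shortened to the repo root
    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy    # TCP transport, zero-copy enabled
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0           # 32 MB ram-backed bdev, 4 KiB blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # 10 s verify workload, queue depth 128, 8 KiB I/O, config fed in without a temp file
    build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192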
00:17:37.086
00:17:37.086 Latency(us)
00:17:37.086 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:37.086 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:17:37.086 Verification LBA range: start 0x0 length 0x1000
00:17:37.086 Nvme1n1 : 10.02 5868.63 45.85 0.00 0.00 21750.06 3640.89 31845.64
00:17:37.086 ===================================================================================================================
00:17:37.086 Total : 5868.63 45.85 0.00 0.00 21750.06 3640.89 31845.64
00:17:37.344 01:46:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=4046273
00:17:37.344 01:46:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:17:37.344 01:46:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:17:37.344 01:46:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 01:46:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 01:46:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 01:46:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 01:46:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:46:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:17:37.344 {
00:17:37.344 "params": {
00:17:37.344 "name": "Nvme$subsystem",
00:17:37.344 "trtype": "$TEST_TRANSPORT",
00:17:37.344 "traddr": "$NVMF_FIRST_TARGET_IP",
00:17:37.344 "adrfam": "ipv4",
00:17:37.344 "trsvcid": "$NVMF_PORT",
00:17:37.344 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:17:37.344 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:17:37.344 "hdgst": ${hdgst:-false},
00:17:37.344 "ddgst": ${ddgst:-false}
00:17:37.344 },
00:17:37.344 "method": "bdev_nvme_attach_controller"
00:17:37.344 }
00:17:37.344 EOF
00:17:37.344 )")
00:17:37.344 01:46:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:17:37.344 [2024-05-15 01:46:01.145035] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:37.344 [2024-05-15 01:46:01.145076] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:37.344 01:46:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:17:37.344 01:46:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:17:37.344 01:46:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:37.344 "params": { 00:17:37.344 "name": "Nvme1", 00:17:37.344 "trtype": "tcp", 00:17:37.344 "traddr": "10.0.0.2", 00:17:37.344 "adrfam": "ipv4", 00:17:37.344 "trsvcid": "4420", 00:17:37.344 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:37.344 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:37.344 "hdgst": false, 00:17:37.344 "ddgst": false 00:17:37.344 }, 00:17:37.344 "method": "bdev_nvme_attach_controller" 00:17:37.344 }' 00:17:37.344 [2024-05-15 01:46:01.152992] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:37.344 [2024-05-15 01:46:01.153014] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:37.344 [2024-05-15 01:46:01.161032] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:37.344 [2024-05-15 01:46:01.161057] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:37.344 [2024-05-15 01:46:01.169051] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:37.344 [2024-05-15 01:46:01.169076] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:37.344 [2024-05-15 01:46:01.177074] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:37.344 [2024-05-15 01:46:01.177099] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:37.344 [2024-05-15 01:46:01.183416] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:17:37.344 [2024-05-15 01:46:01.183504] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4046273 ] 00:17:37.344 [2024-05-15 01:46:01.185096] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:37.344 [2024-05-15 01:46:01.185122] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:37.344 [2024-05-15 01:46:01.193117] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:37.344 [2024-05-15 01:46:01.193141] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:37.344 [2024-05-15 01:46:01.201138] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:37.344 [2024-05-15 01:46:01.201163] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:37.344 [2024-05-15 01:46:01.209159] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:37.344 [2024-05-15 01:46:01.209184] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:37.344 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.344 [2024-05-15 01:46:01.217180] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:37.344 [2024-05-15 01:46:01.217204] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:37.344 [2024-05-15 01:46:01.225202] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:37.344 [2024-05-15 01:46:01.225234] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:37.344 [2024-05-15 01:46:01.233230] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:37.345 [2024-05-15 01:46:01.233266] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:17:37.345 [2024-05-15 01:46:01.255322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 
00:17:37.603 [2024-05-15 01:46:01.346455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 
00:17:37.603 [the two *ERROR* lines above repeat every ~8 ms from 01:46:01.233266 through 01:46:01.570229] 
00:17:37.861 Running I/O for 5 seconds... 
00:17:37.861 [2024-05-15 01:46:01.581116] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:37.861 [2024-05-15 01:46:01.581144] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:17:40.699 [the same two *ERROR* lines repeat roughly every 10 ms through 01:46:04.425025 while the 5-second I/O run proceeds] 
00:17:40.699 [2024-05-15 01:46:04.435980] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.699 [2024-05-15 01:46:04.436007] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.699 [2024-05-15 01:46:04.448750] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.699 [2024-05-15 01:46:04.448778] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.699 [2024-05-15 01:46:04.458614] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.699 [2024-05-15 01:46:04.458642] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.699 [2024-05-15 01:46:04.468528] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.699 [2024-05-15 01:46:04.468556] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.699 [2024-05-15 01:46:04.478777] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.699 [2024-05-15 01:46:04.478804] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.699 [2024-05-15 01:46:04.489122] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.699 [2024-05-15 01:46:04.489149] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.699 [2024-05-15 01:46:04.499391] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.699 [2024-05-15 01:46:04.499419] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.699 [2024-05-15 01:46:04.509810] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.699 [2024-05-15 01:46:04.509837] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.699 [2024-05-15 01:46:04.520049] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.699 [2024-05-15 01:46:04.520076] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.699 [2024-05-15 01:46:04.530418] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.699 [2024-05-15 01:46:04.530445] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.699 [2024-05-15 01:46:04.542602] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.699 [2024-05-15 01:46:04.542629] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.699 [2024-05-15 01:46:04.552283] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.699 [2024-05-15 01:46:04.552310] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.699 [2024-05-15 01:46:04.562132] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.699 [2024-05-15 01:46:04.562162] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.699 [2024-05-15 01:46:04.572801] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.699 [2024-05-15 01:46:04.572829] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.699 [2024-05-15 01:46:04.583721] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.699 [2024-05-15 01:46:04.583748] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.699 [2024-05-15 01:46:04.593832] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.699 [2024-05-15 01:46:04.593860] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.699 [2024-05-15 01:46:04.603990] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.699 [2024-05-15 01:46:04.604017] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.699 [2024-05-15 01:46:04.614353] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.699 [2024-05-15 01:46:04.614381] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.699 [2024-05-15 01:46:04.624938] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.699 [2024-05-15 01:46:04.624965] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.955 [2024-05-15 01:46:04.635580] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.955 [2024-05-15 01:46:04.635608] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.955 [2024-05-15 01:46:04.645854] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.955 [2024-05-15 01:46:04.645881] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.955 [2024-05-15 01:46:04.656578] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.955 [2024-05-15 01:46:04.656606] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.955 [2024-05-15 01:46:04.666902] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.955 [2024-05-15 01:46:04.666930] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.955 [2024-05-15 01:46:04.677918] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.955 [2024-05-15 01:46:04.677946] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.955 [2024-05-15 01:46:04.690763] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.955 [2024-05-15 01:46:04.690791] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.955 [2024-05-15 01:46:04.700763] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.955 [2024-05-15 01:46:04.700790] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.955 [2024-05-15 01:46:04.710688] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.955 [2024-05-15 01:46:04.710715] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.955 [2024-05-15 01:46:04.721052] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.955 [2024-05-15 01:46:04.721079] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.955 [2024-05-15 01:46:04.731105] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.955 [2024-05-15 01:46:04.731132] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.955 [2024-05-15 01:46:04.741575] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.955 [2024-05-15 01:46:04.741603] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.955 [2024-05-15 01:46:04.753970] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.955 [2024-05-15 01:46:04.754007] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.955 [2024-05-15 01:46:04.763289] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.955 [2024-05-15 01:46:04.763316] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.955 [2024-05-15 01:46:04.773923] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.955 [2024-05-15 01:46:04.773950] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.955 [2024-05-15 01:46:04.786468] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.955 [2024-05-15 01:46:04.786495] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.955 [2024-05-15 01:46:04.796486] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.955 [2024-05-15 01:46:04.796514] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.955 [2024-05-15 01:46:04.806805] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.955 [2024-05-15 01:46:04.806832] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.955 [2024-05-15 01:46:04.817203] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.955 [2024-05-15 01:46:04.817238] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.955 [2024-05-15 01:46:04.828014] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.955 [2024-05-15 01:46:04.828056] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.955 [2024-05-15 01:46:04.838280] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.955 [2024-05-15 01:46:04.838308] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.955 [2024-05-15 01:46:04.848608] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.956 [2024-05-15 01:46:04.848637] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.956 [2024-05-15 01:46:04.859388] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.956 [2024-05-15 01:46:04.859416] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.956 [2024-05-15 01:46:04.871800] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.956 [2024-05-15 01:46:04.871828] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:40.956 [2024-05-15 01:46:04.882251] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:40.956 [2024-05-15 01:46:04.882279] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.212 [2024-05-15 01:46:04.893166] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.212 [2024-05-15 01:46:04.893195] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.212 [2024-05-15 01:46:04.905807] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.212 [2024-05-15 01:46:04.905834] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.212 [2024-05-15 01:46:04.916248] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.212 [2024-05-15 01:46:04.916276] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.212 [2024-05-15 01:46:04.926379] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.213 [2024-05-15 01:46:04.926406] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.213 [2024-05-15 01:46:04.936722] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.213 [2024-05-15 01:46:04.936750] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.213 [2024-05-15 01:46:04.947114] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.213 [2024-05-15 01:46:04.947143] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.213 [2024-05-15 01:46:04.957977] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.213 [2024-05-15 01:46:04.958012] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.213 [2024-05-15 01:46:04.968821] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.213 [2024-05-15 01:46:04.968850] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.213 [2024-05-15 01:46:04.979263] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.213 [2024-05-15 01:46:04.979290] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.213 [2024-05-15 01:46:04.989839] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.213 [2024-05-15 01:46:04.989866] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.213 [2024-05-15 01:46:05.000262] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.213 [2024-05-15 01:46:05.000290] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.213 [2024-05-15 01:46:05.010621] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.213 [2024-05-15 01:46:05.010649] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.213 [2024-05-15 01:46:05.020821] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.213 [2024-05-15 01:46:05.020848] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.213 [2024-05-15 01:46:05.031448] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.213 [2024-05-15 01:46:05.031475] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.213 [2024-05-15 01:46:05.042364] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.213 [2024-05-15 01:46:05.042398] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.213 [2024-05-15 01:46:05.053326] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.213 [2024-05-15 01:46:05.053353] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.213 [2024-05-15 01:46:05.065250] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.213 [2024-05-15 01:46:05.065286] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.213 [2024-05-15 01:46:05.074870] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.213 [2024-05-15 01:46:05.074898] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.213 [2024-05-15 01:46:05.085861] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.213 [2024-05-15 01:46:05.085889] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.213 [2024-05-15 01:46:05.096352] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.213 [2024-05-15 01:46:05.096379] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.213 [2024-05-15 01:46:05.109262] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.213 [2024-05-15 01:46:05.109289] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.213 [2024-05-15 01:46:05.118879] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.213 [2024-05-15 01:46:05.118907] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.213 [2024-05-15 01:46:05.129358] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.213 [2024-05-15 01:46:05.129386] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.213 [2024-05-15 01:46:05.140051] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.213 [2024-05-15 01:46:05.140078] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.470 [2024-05-15 01:46:05.150283] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.470 [2024-05-15 01:46:05.150311] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.470 [2024-05-15 01:46:05.160659] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.470 [2024-05-15 01:46:05.160695] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.470 [2024-05-15 01:46:05.170843] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.470 [2024-05-15 01:46:05.170870] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.470 [2024-05-15 01:46:05.181229] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.470 [2024-05-15 01:46:05.181256] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.470 [2024-05-15 01:46:05.191502] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.470 [2024-05-15 01:46:05.191530] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.470 [2024-05-15 01:46:05.201736] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.470 [2024-05-15 01:46:05.201764] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.470 [2024-05-15 01:46:05.212094] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.470 [2024-05-15 01:46:05.212121] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.470 [2024-05-15 01:46:05.222943] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.470 [2024-05-15 01:46:05.222970] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.470 [2024-05-15 01:46:05.233654] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.470 [2024-05-15 01:46:05.233695] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.470 [2024-05-15 01:46:05.246127] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.470 [2024-05-15 01:46:05.246154] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.470 [2024-05-15 01:46:05.255874] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.470 [2024-05-15 01:46:05.255901] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.470 [2024-05-15 01:46:05.266536] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.470 [2024-05-15 01:46:05.266563] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.470 [2024-05-15 01:46:05.277003] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.470 [2024-05-15 01:46:05.277030] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.470 [2024-05-15 01:46:05.289617] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.470 [2024-05-15 01:46:05.289644] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.470 [2024-05-15 01:46:05.301631] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.470 [2024-05-15 01:46:05.301659] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.470 [2024-05-15 01:46:05.310096] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.470 [2024-05-15 01:46:05.310124] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.470 [2024-05-15 01:46:05.322767] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.470 [2024-05-15 01:46:05.322794] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.470 [2024-05-15 01:46:05.332918] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.470 [2024-05-15 01:46:05.332945] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.470 [2024-05-15 01:46:05.342984] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.470 [2024-05-15 01:46:05.343012] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.470 [2024-05-15 01:46:05.353326] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.470 [2024-05-15 01:46:05.353353] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.470 [2024-05-15 01:46:05.363601] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.470 [2024-05-15 01:46:05.363638] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.470 [2024-05-15 01:46:05.373859] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.470 [2024-05-15 01:46:05.373886] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.470 [2024-05-15 01:46:05.384379] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.470 [2024-05-15 01:46:05.384406] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.470 [2024-05-15 01:46:05.394979] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.470 [2024-05-15 01:46:05.395007] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.728 [2024-05-15 01:46:05.405463] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.728 [2024-05-15 01:46:05.405491] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.728 [2024-05-15 01:46:05.418762] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.728 [2024-05-15 01:46:05.418789] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.728 [2024-05-15 01:46:05.429053] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.728 [2024-05-15 01:46:05.429081] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.728 [2024-05-15 01:46:05.439278] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.728 [2024-05-15 01:46:05.439321] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.728 [2024-05-15 01:46:05.449457] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.728 [2024-05-15 01:46:05.449484] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.728 [2024-05-15 01:46:05.459495] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.728 [2024-05-15 01:46:05.459523] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.728 [2024-05-15 01:46:05.469551] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.728 [2024-05-15 01:46:05.469579] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.728 [2024-05-15 01:46:05.480127] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.728 [2024-05-15 01:46:05.480154] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.728 [2024-05-15 01:46:05.492600] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.728 [2024-05-15 01:46:05.492628] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.728 [2024-05-15 01:46:05.504252] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.728 [2024-05-15 01:46:05.504280] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.728 [2024-05-15 01:46:05.513902] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.729 [2024-05-15 01:46:05.513929] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.729 [2024-05-15 01:46:05.524518] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.729 [2024-05-15 01:46:05.524545] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.729 [2024-05-15 01:46:05.534998] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.729 [2024-05-15 01:46:05.535025] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.729 [2024-05-15 01:46:05.545584] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.729 [2024-05-15 01:46:05.545611] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.729 [2024-05-15 01:46:05.558469] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.729 [2024-05-15 01:46:05.558497] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.729 [2024-05-15 01:46:05.568599] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.729 [2024-05-15 01:46:05.568626] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.729 [2024-05-15 01:46:05.578850] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.729 [2024-05-15 01:46:05.578878] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.729 [2024-05-15 01:46:05.589420] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.729 [2024-05-15 01:46:05.589448] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.729 [2024-05-15 01:46:05.600123] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.729 [2024-05-15 01:46:05.600150] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.729 [2024-05-15 01:46:05.610312] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.729 [2024-05-15 01:46:05.610338] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.729 [2024-05-15 01:46:05.621078] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.729 [2024-05-15 01:46:05.621106] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.729 [2024-05-15 01:46:05.631174] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.729 [2024-05-15 01:46:05.631202] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.729 [2024-05-15 01:46:05.641334] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.729 [2024-05-15 01:46:05.641362] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.729 [2024-05-15 01:46:05.651553] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.729 [2024-05-15 01:46:05.651580] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.987 [2024-05-15 01:46:05.661395] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.987 [2024-05-15 01:46:05.661422] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.987 [2024-05-15 01:46:05.671788] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.987 [2024-05-15 01:46:05.671816] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.987 [2024-05-15 01:46:05.682085] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.987 [2024-05-15 01:46:05.682113] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.987 [2024-05-15 01:46:05.692546] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.987 [2024-05-15 01:46:05.692573] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.987 [2024-05-15 01:46:05.703123] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.987 [2024-05-15 01:46:05.703150] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.987 [2024-05-15 01:46:05.715357] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.987 [2024-05-15 01:46:05.715384] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.987 [2024-05-15 01:46:05.724425] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.987 [2024-05-15 01:46:05.724453] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.987 [2024-05-15 01:46:05.735356] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.987 [2024-05-15 01:46:05.735383] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.987 [2024-05-15 01:46:05.745578] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.987 [2024-05-15 01:46:05.745606] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.987 [2024-05-15 01:46:05.755652] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.987 [2024-05-15 01:46:05.755680] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.987 [2024-05-15 01:46:05.765724] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.987 [2024-05-15 01:46:05.765752] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.987 [2024-05-15 01:46:05.775879] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.987 [2024-05-15 01:46:05.775906] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.987 [2024-05-15 01:46:05.786165] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.987 [2024-05-15 01:46:05.786193] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.987 [2024-05-15 01:46:05.796394] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.987 [2024-05-15 01:46:05.796422] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.987 [2024-05-15 01:46:05.806419] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.987 [2024-05-15 01:46:05.806446] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.987 [2024-05-15 01:46:05.816998] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.987 [2024-05-15 01:46:05.817026] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.987 [2024-05-15 01:46:05.829155] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.987 [2024-05-15 01:46:05.829183] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.987 [2024-05-15 01:46:05.837991] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.987 [2024-05-15 01:46:05.838019] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.987 [2024-05-15 01:46:05.848728] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.987 [2024-05-15 01:46:05.848755] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.987 [2024-05-15 01:46:05.858943] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.987 [2024-05-15 01:46:05.858971] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.987 [2024-05-15 01:46:05.869375] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.987 [2024-05-15 01:46:05.869402] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.987 [2024-05-15 01:46:05.879472] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.987 [2024-05-15 01:46:05.879499] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.987 [2024-05-15 01:46:05.899989] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.987 [2024-05-15 01:46:05.900020] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:41.987 [2024-05-15 01:46:05.910363] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:41.987 [2024-05-15 01:46:05.910392] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.245 [2024-05-15 01:46:05.920117] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.245 [2024-05-15 01:46:05.920144] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.245 [2024-05-15 01:46:05.930269] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.245 [2024-05-15 01:46:05.930296] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.245 [2024-05-15 01:46:05.940198] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.245 [2024-05-15 01:46:05.940234] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.245 [2024-05-15 01:46:05.950279] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.245 [2024-05-15 01:46:05.950306] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.245 [2024-05-15 01:46:05.960263] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.245 [2024-05-15 01:46:05.960290] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.245 [2024-05-15 01:46:05.970287] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.245 [2024-05-15 01:46:05.970315] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.245 [2024-05-15 01:46:05.980748] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.245 [2024-05-15 01:46:05.980776] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.245 [2024-05-15 01:46:05.992937] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.245 [2024-05-15 01:46:05.992966] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.245 [2024-05-15 01:46:06.002837] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.245 [2024-05-15 01:46:06.002865] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.245 [2024-05-15 01:46:06.014311] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.245 [2024-05-15 01:46:06.014339] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.245 [2024-05-15 01:46:06.023779] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.245 [2024-05-15 01:46:06.023807] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.245 [2024-05-15 01:46:06.033908] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.245 [2024-05-15 01:46:06.033936] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.245 [2024-05-15 01:46:06.044335] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.245 [2024-05-15 01:46:06.044362] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.245 [2024-05-15 01:46:06.054846] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.245 [2024-05-15 01:46:06.054873] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.245 [2024-05-15 01:46:06.065168] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.245 [2024-05-15 01:46:06.065195] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.245 [2024-05-15 01:46:06.075245] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.245 [2024-05-15 01:46:06.075272] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.245 [2024-05-15 01:46:06.085884] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.245 [2024-05-15 01:46:06.085912] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.245 [2024-05-15 01:46:06.096440] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.245 [2024-05-15 01:46:06.096469] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.245 [2024-05-15 01:46:06.106730] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.245 [2024-05-15 01:46:06.106757] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.245 [2024-05-15 01:46:06.117436] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.245 [2024-05-15 01:46:06.117464] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.245 [2024-05-15 01:46:06.129570] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.245 [2024-05-15 01:46:06.129597] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.245 [2024-05-15 01:46:06.139544] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.245 [2024-05-15 01:46:06.139571] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.245 [2024-05-15 01:46:06.150628] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.245 [2024-05-15 01:46:06.150655] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.245 [2024-05-15 01:46:06.160938] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.245 [2024-05-15 01:46:06.160965] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.245 [2024-05-15 01:46:06.171160] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.245 [2024-05-15 01:46:06.171188] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.504 [2024-05-15 01:46:06.181196] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.504 [2024-05-15 01:46:06.181235] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.504 [2024-05-15 01:46:06.191471] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.504 [2024-05-15 01:46:06.191498] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.504 [2024-05-15 01:46:06.205093] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.504 [2024-05-15 01:46:06.205121] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.504 [2024-05-15 01:46:06.215098] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.504 [2024-05-15 01:46:06.215125] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.504 [2024-05-15 01:46:06.225575] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.504 [2024-05-15 01:46:06.225603] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.504 [2024-05-15 01:46:06.237308] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.504 [2024-05-15 01:46:06.237335] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.504 [2024-05-15 01:46:06.246894] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.504 [2024-05-15 01:46:06.246922] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.504 [2024-05-15 01:46:06.257859] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.504 [2024-05-15 01:46:06.257886] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.504 [2024-05-15 01:46:06.269953] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.504 [2024-05-15 01:46:06.269981] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.504 [2024-05-15 01:46:06.279449] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.504 [2024-05-15 01:46:06.279477] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.504 [2024-05-15 01:46:06.290702] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.504 [2024-05-15 01:46:06.290730] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.504 [2024-05-15 01:46:06.301345] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.504 [2024-05-15 01:46:06.301373] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.504 [2024-05-15 01:46:06.311418] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.504 [2024-05-15 01:46:06.311446] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.504 [2024-05-15 01:46:06.321735] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.504 [2024-05-15 01:46:06.321763] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.504 [2024-05-15 01:46:06.332262] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.504 [2024-05-15 01:46:06.332289] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.504 [2024-05-15 01:46:06.342536] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.504 [2024-05-15 01:46:06.342564] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.504 [2024-05-15 01:46:06.352675] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.504 [2024-05-15 01:46:06.352703] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.504 [2024-05-15 01:46:06.362999] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.504 [2024-05-15 01:46:06.363037] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.504 [2024-05-15 01:46:06.374170] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.504 [2024-05-15 01:46:06.374198] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.504 [2024-05-15 01:46:06.384894] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.504 [2024-05-15 01:46:06.384921] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.504 [2024-05-15 01:46:06.395556] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.504 [2024-05-15 01:46:06.395584] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.504 [2024-05-15 01:46:06.408489] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.504 [2024-05-15 01:46:06.408516] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.504 [2024-05-15 01:46:06.418873] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.504 [2024-05-15 01:46:06.418900] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.504 [2024-05-15 01:46:06.429667] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.504 [2024-05-15 01:46:06.429695] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.763 [2024-05-15 01:46:06.440151] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.763 [2024-05-15 01:46:06.440179] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.763 [2024-05-15 01:46:06.450774] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.763 [2024-05-15 01:46:06.450802] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.763 [2024-05-15 01:46:06.461689] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.763 [2024-05-15 01:46:06.461720] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.763 [2024-05-15 01:46:06.474114] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.763 [2024-05-15 01:46:06.474141] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.763 [2024-05-15 01:46:06.484094] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.763 [2024-05-15 01:46:06.484122] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.763 [2024-05-15 01:46:06.494500] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.763 [2024-05-15 01:46:06.494527] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.763 [2024-05-15 01:46:06.504756] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.763 [2024-05-15 01:46:06.504784] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.763 [2024-05-15 01:46:06.515430] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.763 [2024-05-15 01:46:06.515458] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.763 [2024-05-15 01:46:06.526165] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.763 [2024-05-15 01:46:06.526196] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.763 [2024-05-15 01:46:06.536482] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.763 [2024-05-15 01:46:06.536509] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.763 [2024-05-15 01:46:06.546986] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.763 [2024-05-15 01:46:06.547013] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.763 [2024-05-15 01:46:06.557443] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.763 [2024-05-15 01:46:06.557471] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.763 [2024-05-15 01:46:06.567929] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.763 [2024-05-15 01:46:06.567965] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.763 [2024-05-15 01:46:06.578359] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.763 [2024-05-15 01:46:06.578386] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.763 [2024-05-15 01:46:06.588095] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:42.763 [2024-05-15 01:46:06.588122] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:42.763 00:17:42.763 Latency(us) 00:17:42.763 Device Information : runtime(s) 
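[Note: the flood above is one failure mode repeating: while NSID 1 is still occupied, every namespace-add RPC is rejected, and the target logs the same two *ERROR* lines per attempt. A minimal sketch of the call being retried, assuming SPDK's stock scripts/rpc.py and the subsystem/bdev names that appear later in this log; the harness itself goes through its rpc_cmd wrapper, so this is illustrative, not the literal command run:]
  # one rejected attempt, reproduced by hand; malloc0 and the NQN are taken
  # from the zcopy.sh trace further down, so treat the names as assumptions
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # the target pauses the subsystem, finds NSID 1 in use, fails the add, and
  # resumes; that round-trip produces the subsystem.c/nvmf_rpc.c pair above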
00:17:42.763 Latency(us)
00:17:42.763 Device Information           : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:17:42.763 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:17:42.763 Nvme1n1                      :       5.01   12006.17      93.80       0.00     0.00   10646.15    4684.61   18544.26
00:17:42.763 ===================================================================================================================
00:17:42.763 Total                        :              12006.17      93.80       0.00     0.00   10646.15    4684.61   18544.26
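[Note: a quick consistency check on the summary row, plain shell arithmetic rather than anything the harness runs: with 8192-byte I/Os the IOPS and MiB/s columns should agree, and Little's law at the reported queue depth of 128 should roughly reproduce the average latency.]
  awk 'BEGIN { printf "%.2f MiB/s\n", 12006.17 * 8192 / 1048576 }'   # -> 93.80, matches the MiB/s column
  awk 'BEGIN { printf "%.0f us\n", 128 / 12006.17 * 1e6 }'           # -> ~10661 us, close to the 10646.15 us average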
*ERROR*: Unable to add namespace 00:17:43.022 [2024-05-15 01:46:06.699090] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.022 [2024-05-15 01:46:06.699151] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.022 [2024-05-15 01:46:06.707104] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.022 [2024-05-15 01:46:06.707148] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.022 [2024-05-15 01:46:06.715122] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.022 [2024-05-15 01:46:06.715166] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.022 [2024-05-15 01:46:06.723140] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.022 [2024-05-15 01:46:06.723185] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.022 [2024-05-15 01:46:06.731159] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.022 [2024-05-15 01:46:06.731203] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.022 [2024-05-15 01:46:06.739145] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.022 [2024-05-15 01:46:06.739172] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.022 [2024-05-15 01:46:06.747186] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.022 [2024-05-15 01:46:06.747229] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.022 [2024-05-15 01:46:06.755241] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.022 [2024-05-15 01:46:06.755285] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.022 [2024-05-15 01:46:06.763255] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.022 [2024-05-15 01:46:06.763299] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.022 [2024-05-15 01:46:06.771274] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.022 [2024-05-15 01:46:06.771309] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.022 [2024-05-15 01:46:06.779273] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.022 [2024-05-15 01:46:06.779298] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.022 [2024-05-15 01:46:06.787325] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.022 [2024-05-15 01:46:06.787370] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.022 [2024-05-15 01:46:06.795342] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.022 [2024-05-15 01:46:06.795388] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.022 [2024-05-15 01:46:06.803345] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.022 [2024-05-15 01:46:06.803377] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.022 [2024-05-15 01:46:06.811332] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:17:43.022 [2024-05-15 01:46:06.811353] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.022 [2024-05-15 01:46:06.819353] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:43.022 [2024-05-15 01:46:06.819375] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:43.022 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (4046273) - No such process 00:17:43.022 01:46:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 4046273 00:17:43.022 01:46:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:43.022 01:46:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:43.022 01:46:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:43.022 01:46:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:43.022 01:46:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:17:43.022 01:46:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:43.022 01:46:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:43.022 delay0 00:17:43.022 01:46:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:43.023 01:46:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:17:43.023 01:46:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:43.023 01:46:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:43.023 01:46:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:43.023 01:46:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:17:43.023 EAL: No free 2048 kB hugepages reported on node 1 00:17:43.023 [2024-05-15 01:46:06.935205] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:17:51.210 [2024-05-15 01:46:13.974112] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb77980 is same with the state(5) to be set 00:17:51.210 Initializing NVMe Controllers 00:17:51.210 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:51.210 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:51.210 Initialization complete. Launching workers. 
00:17:51.210 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 5242 00:17:51.210 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 5527, failed to submit 35 00:17:51.210 success 5363, unsuccess 164, failed 0 00:17:51.210 01:46:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:17:51.210 01:46:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:17:51.210 01:46:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:51.210 01:46:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:17:51.210 01:46:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:51.210 01:46:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:17:51.210 01:46:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:51.210 01:46:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:51.210 rmmod nvme_tcp 00:17:51.210 rmmod nvme_fabrics 00:17:51.210 rmmod nvme_keyring 00:17:51.210 01:46:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:51.210 01:46:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:17:51.210 01:46:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:17:51.210 01:46:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 4044944 ']' 00:17:51.210 01:46:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 4044944 00:17:51.210 01:46:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@947 -- # '[' -z 4044944 ']' 00:17:51.210 01:46:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # kill -0 4044944 00:17:51.210 01:46:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # uname 00:17:51.210 01:46:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:17:51.210 01:46:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4044944 00:17:51.210 01:46:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:17:51.210 01:46:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:17:51.210 01:46:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4044944' 00:17:51.210 killing process with pid 4044944 00:17:51.210 01:46:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # kill 4044944 00:17:51.210 [2024-05-15 01:46:14.066035] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:51.210 01:46:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@971 -- # wait 4044944 00:17:51.210 01:46:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:51.210 01:46:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:51.210 01:46:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:51.210 01:46:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:51.210 01:46:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:51.210 01:46:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.210 01:46:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:51.210 01:46:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.587 
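
The abort phase of nvmf_zcopy traced above can be reproduced by hand against an already-running target. A minimal sketch, assuming the state zcopy.sh had built earlier in this log (an nvmf target with subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420 over TCP, and a malloc bdev named malloc0); paths, RPC names, and arguments are taken verbatim from the trace:

    #!/usr/bin/env bash
    # Sketch of the target/zcopy.sh steps at lines 52-56 as traced above
    # (not the script itself).
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc=$SPDK/scripts/rpc.py

    # Drop the namespace the paused-subsystem loop above kept failing to re-add.
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1

    # Wrap malloc0 in a delay bdev; all four delay latencies are 1,000,000 us,
    # so every I/O stays in flight long enough for an abort to catch it.
    $rpc bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000

    # Re-export the delayed bdev as NSID 1, then drive it with queued aborts.
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    $SPDK/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

Read this way, the "abort submitted 5527, failed to submit 35 / success 5363, unsuccess 164, failed 0" summary above is the abort tool's own tally: most queued commands were aborted successfully, a minority completed before their abort landed.
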
01:46:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:52.587 00:17:52.587 real 0m29.086s 00:17:52.587 user 0m42.084s 00:17:52.587 sys 0m9.195s 00:17:52.587 01:46:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # xtrace_disable 00:17:52.587 01:46:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:52.587 ************************************ 00:17:52.587 END TEST nvmf_zcopy 00:17:52.587 ************************************ 00:17:52.587 01:46:16 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:17:52.587 01:46:16 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:17:52.587 01:46:16 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:17:52.587 01:46:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:52.587 ************************************ 00:17:52.587 START TEST nvmf_nmic 00:17:52.587 ************************************ 00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:17:52.587 * Looking for test storage... 00:17:52.587 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... output trimmed: the same three /opt tool-chain dirs repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same value with /opt/go/1.21.1/bin prepended ...]
00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same value with /opt/protoc/21.7/bin prepended ...]
00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH
00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo [... the exported PATH value ...]
00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0
00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0
00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- #
nvmftestinit 00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:17:52.587 01:46:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:55.116 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:17:55.116 Found 0000:09:00.1 (0x8086 - 0x159b) 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:55.116 Found net devices under 0000:09:00.0: cvl_0_0 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:55.116 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:55.117 01:46:19 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:55.117 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:55.117 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:55.117 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:55.117 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:55.117 Found net devices under 0000:09:00.1: cvl_0_1 00:17:55.117 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:55.117 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:55.117 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:17:55.117 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:55.117 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:55.117 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:55.117 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:55.117 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:55.117 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:55.117 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:55.117 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:55.117 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:55.117 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:55.117 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:55.117 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:55.117 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:55.117 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:55.117 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:55.117 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:55.376 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:55.376 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:55.376 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:55.376 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:55.376 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:55.376 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:55.376 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:55.376 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:55.376 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:17:55.376 00:17:55.376 --- 10.0.0.2 ping statistics --- 00:17:55.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.376 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:17:55.376 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:55.376 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:55.376 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:17:55.376 00:17:55.376 --- 10.0.0.1 ping statistics --- 00:17:55.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.376 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:17:55.376 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:55.376 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:17:55.376 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:55.376 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:55.376 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:55.376 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:55.376 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:55.376 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:55.376 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:55.376 01:46:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:17:55.376 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:55.376 01:46:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@721 -- # xtrace_disable 00:17:55.376 01:46:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:55.376 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=4050075 00:17:55.376 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:55.376 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 4050075 00:17:55.376 01:46:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@828 -- # '[' -z 4050075 ']' 00:17:55.376 01:46:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.376 01:46:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local max_retries=100 00:17:55.376 01:46:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:55.376 01:46:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@837 -- # xtrace_disable 00:17:55.376 01:46:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:55.376 [2024-05-15 01:46:19.213737] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
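
For reference, the network plumbing that nvmftestinit traced a few lines above boils down to a handful of iproute2 commands: the target-side port is moved into its own network namespace so the initiator (10.0.0.1 on cvl_0_1) and the target (10.0.0.2 on cvl_0_0) exchange real TCP traffic over the wire. A condensed sketch, with interface names and addresses exactly as in the trace (run as root):

    #!/usr/bin/env bash
    # Condensed from the nvmf_tcp_init trace above; cvl_0_0/cvl_0_1 are the two
    # ports of the ice NIC found at 0000:09:00.0 / 0000:09:00.1.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1

    ip netns add cvl_0_0_ns_spdk                 # the target lives in its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
    ping -c 1 10.0.0.2                           # sanity check, as logged above

This is also why, in the nvmfappstart line below, nvmf_tgt itself is wrapped in 'ip netns exec cvl_0_0_ns_spdk': the target process must run inside the namespace that owns cvl_0_0.
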
00:17:55.376 [2024-05-15 01:46:19.213808] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:55.376 EAL: No free 2048 kB hugepages reported on node 1 00:17:55.376 [2024-05-15 01:46:19.290009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:55.635 [2024-05-15 01:46:19.377672] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:55.635 [2024-05-15 01:46:19.377724] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:55.635 [2024-05-15 01:46:19.377751] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:55.635 [2024-05-15 01:46:19.377764] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:55.635 [2024-05-15 01:46:19.377775] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:55.635 [2024-05-15 01:46:19.377835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:55.635 [2024-05-15 01:46:19.377864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:55.635 [2024-05-15 01:46:19.377987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:55.635 [2024-05-15 01:46:19.377990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.635 01:46:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:17:55.635 01:46:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@861 -- # return 0 00:17:55.635 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:55.635 01:46:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@727 -- # xtrace_disable 00:17:55.635 01:46:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:55.635 01:46:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:55.635 01:46:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:55.635 01:46:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:55.635 01:46:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:55.635 [2024-05-15 01:46:19.523749] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:55.635 01:46:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:55.635 01:46:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:55.635 01:46:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:55.635 01:46:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:55.635 Malloc0 00:17:55.635 01:46:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:55.635 01:46:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:55.635 01:46:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:55.635 01:46:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:55.635 01:46:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:55.635 01:46:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:55.635 01:46:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:55.635 01:46:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:55.892 01:46:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:55.892 01:46:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:55.892 01:46:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:55.892 01:46:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:55.892 [2024-05-15 01:46:19.576896] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:55.892 [2024-05-15 01:46:19.577244] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:55.892 01:46:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:55.892 01:46:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:17:55.892 test case1: single bdev can't be used in multiple subsystems 00:17:55.892 01:46:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:17:55.892 01:46:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:55.892 01:46:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:55.892 01:46:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:55.892 01:46:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:55.892 01:46:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:55.892 01:46:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:55.892 01:46:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:55.892 01:46:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:17:55.892 01:46:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:17:55.892 01:46:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:55.892 01:46:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:55.893 [2024-05-15 01:46:19.600998] bdev.c:8030:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:17:55.893 [2024-05-15 01:46:19.601028] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:17:55.893 [2024-05-15 01:46:19.601043] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.893 request: 00:17:55.893 { 00:17:55.893 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:17:55.893 "namespace": { 00:17:55.893 "bdev_name": "Malloc0", 00:17:55.893 "no_auto_visible": false 00:17:55.893 }, 00:17:55.893 "method": "nvmf_subsystem_add_ns", 00:17:55.893 "req_id": 1 00:17:55.893 } 00:17:55.893 Got JSON-RPC error response 00:17:55.893 response: 00:17:55.893 { 00:17:55.893 "code": -32602, 00:17:55.893 "message": "Invalid parameters" 00:17:55.893 } 00:17:55.893 01:46:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:17:55.893 01:46:19 
nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:17:55.893 01:46:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:17:55.893 01:46:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:17:55.893 Adding namespace failed - expected result. 00:17:55.893 01:46:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:17:55.893 test case2: host connect to nvmf target in multiple paths 00:17:55.893 01:46:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:55.893 01:46:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:55.893 01:46:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:55.893 [2024-05-15 01:46:19.609120] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:55.893 01:46:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:55.893 01:46:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:56.457 01:46:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:17:57.022 01:46:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:17:57.022 01:46:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local i=0 00:17:57.022 01:46:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:17:57.022 01:46:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:17:57.022 01:46:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # sleep 2 00:17:58.918 01:46:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:17:58.918 01:46:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:17:58.918 01:46:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:17:58.918 01:46:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:17:58.918 01:46:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:17:58.918 01:46:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # return 0 00:17:58.918 01:46:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:58.918 [global] 00:17:58.918 thread=1 00:17:58.918 invalidate=1 00:17:58.918 rw=write 00:17:58.918 time_based=1 00:17:58.918 runtime=1 00:17:58.918 ioengine=libaio 00:17:58.918 direct=1 00:17:58.918 bs=4096 00:17:58.918 iodepth=1 00:17:58.918 norandommap=0 00:17:58.918 numjobs=1 00:17:58.918 00:17:58.918 verify_dump=1 00:17:58.918 verify_backlog=512 00:17:58.918 verify_state_save=0 00:17:58.918 do_verify=1 00:17:58.918 verify=crc32c-intel 00:17:58.918 [job0] 00:17:58.918 filename=/dev/nvme0n1 00:17:58.918 Could not set queue depth (nvme0n1) 00:17:59.175 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:17:59.175 fio-3.35 00:17:59.175 Starting 1 thread 00:18:00.547 00:18:00.547 job0: (groupid=0, jobs=1): err= 0: pid=4050589: Wed May 15 01:46:24 2024 00:18:00.547 read: IOPS=110, BW=443KiB/s (454kB/s)(448KiB/1011msec) 00:18:00.547 slat (nsec): min=5629, max=34196, avg=11895.74, stdev=6340.50 00:18:00.547 clat (usec): min=245, max=42037, avg=8075.61, stdev=16128.84 00:18:00.547 lat (usec): min=253, max=42053, avg=8087.51, stdev=16131.37 00:18:00.547 clat percentiles (usec): 00:18:00.547 | 1.00th=[ 249], 5.00th=[ 253], 10.00th=[ 273], 20.00th=[ 293], 00:18:00.547 | 30.00th=[ 306], 40.00th=[ 338], 50.00th=[ 412], 60.00th=[ 424], 00:18:00.547 | 70.00th=[ 433], 80.00th=[ 668], 90.00th=[41157], 95.00th=[42206], 00:18:00.547 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:00.547 | 99.99th=[42206] 00:18:00.547 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:18:00.547 slat (nsec): min=6914, max=41184, avg=16293.96, stdev=7384.36 00:18:00.547 clat (usec): min=140, max=320, avg=183.11, stdev=16.63 00:18:00.547 lat (usec): min=148, max=356, avg=199.40, stdev=21.22 00:18:00.547 clat percentiles (usec): 00:18:00.547 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 159], 20.00th=[ 172], 00:18:00.547 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 188], 00:18:00.547 | 70.00th=[ 192], 80.00th=[ 194], 90.00th=[ 200], 95.00th=[ 208], 00:18:00.547 | 99.00th=[ 219], 99.50th=[ 231], 99.90th=[ 322], 99.95th=[ 322], 00:18:00.547 | 99.99th=[ 322] 00:18:00.547 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:18:00.547 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:00.547 lat (usec) : 250=82.37%, 500=13.94%, 750=0.16%, 1000=0.16% 00:18:00.547 lat (msec) : 50=3.37% 00:18:00.547 cpu : usr=1.98%, sys=0.00%, ctx=624, majf=0, minf=2 00:18:00.547 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:00.547 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:00.547 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:00.547 issued rwts: total=112,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:00.547 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:00.547 00:18:00.547 Run status group 0 (all jobs): 00:18:00.547 READ: bw=443KiB/s (454kB/s), 443KiB/s-443KiB/s (454kB/s-454kB/s), io=448KiB (459kB), run=1011-1011msec 00:18:00.547 WRITE: bw=2026KiB/s (2074kB/s), 2026KiB/s-2026KiB/s (2074kB/s-2074kB/s), io=2048KiB (2097kB), run=1011-1011msec 00:18:00.547 00:18:00.547 Disk stats (read/write): 00:18:00.547 nvme0n1: ios=159/512, merge=0/0, ticks=886/96, in_queue=982, util=95.69% 00:18:00.547 01:46:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:00.547 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:00.547 01:46:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:00.547 01:46:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # local i=0 00:18:00.547 01:46:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:18:00.547 01:46:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:00.547 01:46:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:18:00.547 01:46:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:00.547 01:46:24 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1228 -- # return 0 00:18:00.547 01:46:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:00.547 01:46:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:18:00.547 01:46:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:00.547 01:46:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:18:00.547 01:46:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:00.547 01:46:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:18:00.547 01:46:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:00.547 01:46:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:00.547 rmmod nvme_tcp 00:18:00.547 rmmod nvme_fabrics 00:18:00.547 rmmod nvme_keyring 00:18:00.547 01:46:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:00.547 01:46:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:18:00.547 01:46:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:18:00.547 01:46:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 4050075 ']' 00:18:00.547 01:46:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 4050075 00:18:00.547 01:46:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@947 -- # '[' -z 4050075 ']' 00:18:00.547 01:46:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # kill -0 4050075 00:18:00.547 01:46:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # uname 00:18:00.547 01:46:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:00.547 01:46:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4050075 00:18:00.547 01:46:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:18:00.547 01:46:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:18:00.548 01:46:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4050075' 00:18:00.548 killing process with pid 4050075 00:18:00.548 01:46:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # kill 4050075 00:18:00.548 [2024-05-15 01:46:24.244017] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:00.548 01:46:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@971 -- # wait 4050075 00:18:00.807 01:46:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:00.807 01:46:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:00.807 01:46:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:00.807 01:46:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:00.807 01:46:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:00.807 01:46:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.807 01:46:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:00.807 01:46:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.711 01:46:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:02.711 00:18:02.711 real 0m10.144s 00:18:02.711 user 0m21.439s 00:18:02.711 sys 0m2.603s 00:18:02.711 01:46:26 nvmf_tcp.nvmf_nmic -- 
common/autotest_common.sh@1123 -- # xtrace_disable 00:18:02.711 01:46:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:02.711 ************************************ 00:18:02.711 END TEST nvmf_nmic 00:18:02.711 ************************************ 00:18:02.711 01:46:26 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:02.711 01:46:26 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:18:02.711 01:46:26 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:18:02.711 01:46:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:02.711 ************************************ 00:18:02.711 START TEST nvmf_fio_target 00:18:02.711 ************************************ 00:18:02.711 01:46:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:02.970 * Looking for test storage... 00:18:02.970 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
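
Before the next test's setup trace continues, the nvmf_nmic run above is worth distilling. Its "test case1" banner verified that a bdev already claimed (type exclusive_write) by one subsystem cannot be added to a second subsystem; the RPC fails cleanly with code -32602 instead of corrupting state. A minimal reproduction sketch against a freshly started nvmf_tgt with the TCP transport created, using the same NQNs, serials, and bdev name as the trace:

    #!/usr/bin/env bash
    # Reproduces nmic "test case1" from the run above; the second
    # nvmf_subsystem_add_ns is expected to fail ("bdev Malloc0 already claimed").
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # claims Malloc0

    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    if ! $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
        echo ' Adding namespace failed - expected result.'
    fi

"test case2" is the complementary positive check: one subsystem may expose several listeners (ports 4420 and 4421 above), and the host connects to cnode1 through both paths at once before fio exercises the resulting device.
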
00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... output trimmed: the same three /opt tool-chain dirs repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same value with /opt/go/1.21.1/bin prepended ...]
00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same value with /opt/protoc/21.7/bin prepended ...]
00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH
00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo [... the exported PATH value ...]
00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0
00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0
00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target
-- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:02.970 01:46:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.506 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:05.506 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:05.506 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:05.506 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:05.506 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:05.506 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:05.506 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:05.506 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:05.506 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:05.506 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:18:05.506 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:05.506 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:18:05.506 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:05.506 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:05.507 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:05.507 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:05.507 
01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:05.507 Found net devices under 0000:09:00.0: cvl_0_0 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:05.507 Found net devices under 0000:09:00.1: cvl_0_1 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:05.507 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:05.507 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:18:05.507 00:18:05.507 --- 10.0.0.2 ping statistics --- 00:18:05.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.507 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:18:05.507 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:05.507 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:05.507 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:18:05.508 00:18:05.508 --- 10.0.0.1 ping statistics --- 00:18:05.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.508 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:18:05.508 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:05.508 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:18:05.508 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:05.508 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:05.508 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:05.508 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:05.508 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:05.508 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:05.508 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:05.508 01:46:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:05.508 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:05.508 01:46:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@721 -- # xtrace_disable 00:18:05.508 01:46:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.508 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=4053068 00:18:05.508 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:05.508 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 4053068 00:18:05.508 01:46:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@828 -- # '[' -z 4053068 ']' 00:18:05.508 01:46:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.508 01:46:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:05.508 01:46:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
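The network plumbing traced above amounts to a short standalone sequence — interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are taken from this run. The point of the topology is that the target side lives in its own network namespace, so initiator traffic has to cross a real link rather than loopback:

ip netns add cvl_0_0_ns_spdk                        # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target NIC into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator interface, host side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # default NVMe/TCP port
ping -c 1 10.0.0.2                                  # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

nvmf_tgt is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...), which is the waitforlisten step traced next.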
00:18:05.508 01:46:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:05.508 01:46:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.508 [2024-05-15 01:46:29.377172] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:18:05.508 [2024-05-15 01:46:29.377287] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:05.508 EAL: No free 2048 kB hugepages reported on node 1 00:18:05.766 [2024-05-15 01:46:29.459904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:05.766 [2024-05-15 01:46:29.552114] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:05.766 [2024-05-15 01:46:29.552178] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:05.766 [2024-05-15 01:46:29.552195] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:05.766 [2024-05-15 01:46:29.552209] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:05.766 [2024-05-15 01:46:29.552231] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:05.767 [2024-05-15 01:46:29.552316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:05.767 [2024-05-15 01:46:29.552357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:05.767 [2024-05-15 01:46:29.552436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:05.767 [2024-05-15 01:46:29.552439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.767 01:46:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:05.767 01:46:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@861 -- # return 0 00:18:05.767 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:05.767 01:46:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@727 -- # xtrace_disable 00:18:05.767 01:46:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.025 01:46:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:06.025 01:46:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:06.283 [2024-05-15 01:46:29.975943] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:06.283 01:46:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:06.541 01:46:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:06.541 01:46:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:06.799 01:46:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:06.799 01:46:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:07.058 01:46:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:18:07.058 01:46:30 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:07.316 01:46:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:18:07.316 01:46:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:07.882 01:46:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:07.882 01:46:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:18:07.882 01:46:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:08.447 01:46:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:18:08.448 01:46:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:08.707 01:46:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:18:08.707 01:46:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:18:08.707 01:46:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:08.965 01:46:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:08.965 01:46:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:09.222 01:46:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:09.222 01:46:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:09.480 01:46:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:09.737 [2024-05-15 01:46:33.595069] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:09.737 [2024-05-15 01:46:33.595422] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:09.737 01:46:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:18:09.994 01:46:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:18:10.251 01:46:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:10.816 01:46:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 
-- # waitforserial SPDKISFASTANDAWESOME 4 00:18:10.816 01:46:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local i=0 00:18:10.816 01:46:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:18:10.816 01:46:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # [[ -n 4 ]] 00:18:10.816 01:46:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # nvme_device_counter=4 00:18:10.816 01:46:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # sleep 2 00:18:12.739 01:46:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:18:12.739 01:46:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:18:12.739 01:46:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:18:12.739 01:46:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # nvme_devices=4 00:18:12.739 01:46:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:18:12.739 01:46:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # return 0 00:18:12.739 01:46:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:12.996 [global] 00:18:12.996 thread=1 00:18:12.996 invalidate=1 00:18:12.996 rw=write 00:18:12.996 time_based=1 00:18:12.996 runtime=1 00:18:12.996 ioengine=libaio 00:18:12.996 direct=1 00:18:12.996 bs=4096 00:18:12.996 iodepth=1 00:18:12.996 norandommap=0 00:18:12.996 numjobs=1 00:18:12.996 00:18:12.996 verify_dump=1 00:18:12.996 verify_backlog=512 00:18:12.996 verify_state_save=0 00:18:12.996 do_verify=1 00:18:12.996 verify=crc32c-intel 00:18:12.996 [job0] 00:18:12.996 filename=/dev/nvme0n1 00:18:12.996 [job1] 00:18:12.996 filename=/dev/nvme0n2 00:18:12.996 [job2] 00:18:12.996 filename=/dev/nvme0n3 00:18:12.996 [job3] 00:18:12.996 filename=/dev/nvme0n4 00:18:12.996 Could not set queue depth (nvme0n1) 00:18:12.996 Could not set queue depth (nvme0n2) 00:18:12.996 Could not set queue depth (nvme0n3) 00:18:12.996 Could not set queue depth (nvme0n4) 00:18:12.996 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:12.996 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:12.996 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:12.996 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:12.996 fio-3.35 00:18:12.996 Starting 4 threads 00:18:14.369 00:18:14.369 job0: (groupid=0, jobs=1): err= 0: pid=4054145: Wed May 15 01:46:38 2024 00:18:14.369 read: IOPS=1547, BW=6190KiB/s (6338kB/s)(6196KiB/1001msec) 00:18:14.369 slat (nsec): min=5042, max=56301, avg=13862.68, stdev=6674.20 00:18:14.369 clat (usec): min=218, max=40979, avg=357.83, stdev=1036.63 00:18:14.369 lat (usec): min=230, max=40993, avg=371.69, stdev=1036.64 00:18:14.369 clat percentiles (usec): 00:18:14.369 | 1.00th=[ 239], 5.00th=[ 249], 10.00th=[ 255], 20.00th=[ 265], 00:18:14.369 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 293], 60.00th=[ 310], 00:18:14.370 | 70.00th=[ 334], 80.00th=[ 424], 90.00th=[ 490], 95.00th=[ 510], 00:18:14.370 | 99.00th=[ 562], 99.50th=[ 578], 99.90th=[ 709], 99.95th=[41157], 00:18:14.370 | 
99.99th=[41157] 00:18:14.370 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:18:14.370 slat (nsec): min=6183, max=36310, avg=10945.40, stdev=4605.90 00:18:14.370 clat (usec): min=135, max=303, avg=190.29, stdev=26.71 00:18:14.370 lat (usec): min=146, max=311, avg=201.24, stdev=25.11 00:18:14.370 clat percentiles (usec): 00:18:14.370 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 165], 00:18:14.370 | 30.00th=[ 176], 40.00th=[ 184], 50.00th=[ 192], 60.00th=[ 198], 00:18:14.370 | 70.00th=[ 204], 80.00th=[ 212], 90.00th=[ 225], 95.00th=[ 237], 00:18:14.370 | 99.00th=[ 251], 99.50th=[ 258], 99.90th=[ 269], 99.95th=[ 273], 00:18:14.370 | 99.99th=[ 306] 00:18:14.370 bw ( KiB/s): min= 8192, max= 8192, per=34.27%, avg=8192.00, stdev= 0.00, samples=1 00:18:14.370 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:14.370 lat (usec) : 250=58.74%, 500=38.25%, 750=2.97% 00:18:14.370 lat (msec) : 50=0.03% 00:18:14.370 cpu : usr=3.10%, sys=3.80%, ctx=3598, majf=0, minf=2 00:18:14.370 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:14.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.370 issued rwts: total=1549,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.370 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:14.370 job1: (groupid=0, jobs=1): err= 0: pid=4054146: Wed May 15 01:46:38 2024 00:18:14.370 read: IOPS=1009, BW=4039KiB/s (4136kB/s)(4120KiB/1020msec) 00:18:14.370 slat (nsec): min=4727, max=68013, avg=14159.87, stdev=8463.79 00:18:14.370 clat (usec): min=199, max=41464, avg=605.59, stdev=3320.73 00:18:14.370 lat (usec): min=211, max=41472, avg=619.75, stdev=3320.86 00:18:14.370 clat percentiles (usec): 00:18:14.370 | 1.00th=[ 217], 5.00th=[ 225], 10.00th=[ 231], 20.00th=[ 239], 00:18:14.370 | 30.00th=[ 247], 40.00th=[ 265], 50.00th=[ 306], 60.00th=[ 334], 00:18:14.370 | 70.00th=[ 379], 80.00th=[ 400], 90.00th=[ 453], 95.00th=[ 502], 00:18:14.370 | 99.00th=[ 1221], 99.50th=[40633], 99.90th=[41157], 99.95th=[41681], 00:18:14.370 | 99.99th=[41681] 00:18:14.370 write: IOPS=1505, BW=6024KiB/s (6168kB/s)(6144KiB/1020msec); 0 zone resets 00:18:14.370 slat (nsec): min=6091, max=49882, avg=11990.95, stdev=5106.41 00:18:14.370 clat (usec): min=135, max=521, avg=230.21, stdev=59.00 00:18:14.370 lat (usec): min=144, max=537, avg=242.20, stdev=59.81 00:18:14.370 clat percentiles (usec): 00:18:14.370 | 1.00th=[ 145], 5.00th=[ 163], 10.00th=[ 178], 20.00th=[ 194], 00:18:14.370 | 30.00th=[ 204], 40.00th=[ 215], 50.00th=[ 221], 60.00th=[ 227], 00:18:14.370 | 70.00th=[ 233], 80.00th=[ 243], 90.00th=[ 302], 95.00th=[ 383], 00:18:14.370 | 99.00th=[ 445], 99.50th=[ 453], 99.90th=[ 486], 99.95th=[ 523], 00:18:14.370 | 99.99th=[ 523] 00:18:14.370 bw ( KiB/s): min= 4800, max= 7488, per=25.70%, avg=6144.00, stdev=1900.70, samples=2 00:18:14.370 iops : min= 1200, max= 1872, avg=1536.00, stdev=475.18, samples=2 00:18:14.370 lat (usec) : 250=63.91%, 500=33.94%, 750=1.60%, 1000=0.08% 00:18:14.370 lat (msec) : 2=0.08%, 4=0.12%, 50=0.27% 00:18:14.370 cpu : usr=2.16%, sys=2.85%, ctx=2567, majf=0, minf=1 00:18:14.370 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:14.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.370 issued rwts: total=1030,1536,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:18:14.370 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:14.370 job2: (groupid=0, jobs=1): err= 0: pid=4054147: Wed May 15 01:46:38 2024 00:18:14.370 read: IOPS=603, BW=2412KiB/s (2470kB/s)(2480KiB/1028msec) 00:18:14.370 slat (nsec): min=5397, max=87343, avg=14637.74, stdev=8075.11 00:18:14.370 clat (usec): min=231, max=42239, avg=1248.56, stdev=5845.60 00:18:14.370 lat (usec): min=243, max=42247, avg=1263.20, stdev=5847.85 00:18:14.370 clat percentiles (usec): 00:18:14.370 | 1.00th=[ 243], 5.00th=[ 262], 10.00th=[ 285], 20.00th=[ 318], 00:18:14.370 | 30.00th=[ 347], 40.00th=[ 375], 50.00th=[ 388], 60.00th=[ 400], 00:18:14.370 | 70.00th=[ 429], 80.00th=[ 465], 90.00th=[ 506], 95.00th=[ 553], 00:18:14.370 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:18:14.370 | 99.99th=[42206] 00:18:14.370 write: IOPS=996, BW=3984KiB/s (4080kB/s)(4096KiB/1028msec); 0 zone resets 00:18:14.370 slat (nsec): min=6668, max=38409, avg=9001.26, stdev=3779.08 00:18:14.370 clat (usec): min=151, max=417, avg=224.65, stdev=33.31 00:18:14.370 lat (usec): min=159, max=425, avg=233.65, stdev=32.62 00:18:14.370 clat percentiles (usec): 00:18:14.370 | 1.00th=[ 161], 5.00th=[ 172], 10.00th=[ 182], 20.00th=[ 204], 00:18:14.370 | 30.00th=[ 215], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 231], 00:18:14.370 | 70.00th=[ 237], 80.00th=[ 243], 90.00th=[ 253], 95.00th=[ 269], 00:18:14.370 | 99.00th=[ 388], 99.50th=[ 392], 99.90th=[ 400], 99.95th=[ 416], 00:18:14.370 | 99.99th=[ 416] 00:18:14.370 bw ( KiB/s): min= 8192, max= 8192, per=34.27%, avg=8192.00, stdev= 0.00, samples=1 00:18:14.370 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:14.370 lat (usec) : 250=54.87%, 500=40.75%, 750=3.22%, 1000=0.18% 00:18:14.370 lat (msec) : 2=0.18%, 50=0.79% 00:18:14.370 cpu : usr=0.97%, sys=1.75%, ctx=1646, majf=0, minf=1 00:18:14.370 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:14.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.370 issued rwts: total=620,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.370 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:14.370 job3: (groupid=0, jobs=1): err= 0: pid=4054148: Wed May 15 01:46:38 2024 00:18:14.370 read: IOPS=1470, BW=5881KiB/s (6023kB/s)(5952KiB/1012msec) 00:18:14.370 slat (nsec): min=6186, max=45529, avg=12709.69, stdev=5787.13 00:18:14.370 clat (usec): min=246, max=41007, avg=399.63, stdev=1488.81 00:18:14.370 lat (usec): min=254, max=41042, avg=412.34, stdev=1489.20 00:18:14.370 clat percentiles (usec): 00:18:14.370 | 1.00th=[ 262], 5.00th=[ 273], 10.00th=[ 285], 20.00th=[ 293], 00:18:14.370 | 30.00th=[ 302], 40.00th=[ 310], 50.00th=[ 322], 60.00th=[ 338], 00:18:14.370 | 70.00th=[ 359], 80.00th=[ 383], 90.00th=[ 445], 95.00th=[ 494], 00:18:14.370 | 99.00th=[ 586], 99.50th=[ 693], 99.90th=[40633], 99.95th=[41157], 00:18:14.370 | 99.99th=[41157] 00:18:14.370 write: IOPS=1517, BW=6071KiB/s (6217kB/s)(6144KiB/1012msec); 0 zone resets 00:18:14.370 slat (nsec): min=8092, max=45929, avg=14107.93, stdev=6939.30 00:18:14.370 clat (usec): min=169, max=490, avg=237.37, stdev=48.24 00:18:14.370 lat (usec): min=179, max=499, avg=251.48, stdev=47.90 00:18:14.370 clat percentiles (usec): 00:18:14.370 | 1.00th=[ 180], 5.00th=[ 190], 10.00th=[ 202], 20.00th=[ 210], 00:18:14.370 | 30.00th=[ 215], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 
231], 00:18:14.370 | 70.00th=[ 237], 80.00th=[ 247], 90.00th=[ 293], 95.00th=[ 363], 00:18:14.370 | 99.00th=[ 416], 99.50th=[ 437], 99.90th=[ 478], 99.95th=[ 490], 00:18:14.370 | 99.99th=[ 490] 00:18:14.370 bw ( KiB/s): min= 4096, max= 8192, per=25.70%, avg=6144.00, stdev=2896.31, samples=2 00:18:14.370 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:18:14.370 lat (usec) : 250=41.44%, 500=56.68%, 750=1.65%, 1000=0.13% 00:18:14.370 lat (msec) : 2=0.03%, 50=0.07% 00:18:14.370 cpu : usr=3.17%, sys=5.04%, ctx=3027, majf=0, minf=1 00:18:14.370 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:14.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.370 issued rwts: total=1488,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.370 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:14.370 00:18:14.370 Run status group 0 (all jobs): 00:18:14.370 READ: bw=17.8MiB/s (18.7MB/s), 2412KiB/s-6190KiB/s (2470kB/s-6338kB/s), io=18.3MiB (19.2MB), run=1001-1028msec 00:18:14.370 WRITE: bw=23.3MiB/s (24.5MB/s), 3984KiB/s-8184KiB/s (4080kB/s-8380kB/s), io=24.0MiB (25.2MB), run=1001-1028msec 00:18:14.370 00:18:14.371 Disk stats (read/write): 00:18:14.371 nvme0n1: ios=1383/1536, merge=0/0, ticks=1129/300, in_queue=1429, util=85.37% 00:18:14.371 nvme0n2: ios=1073/1536, merge=0/0, ticks=1008/348, in_queue=1356, util=89.42% 00:18:14.371 nvme0n3: ios=672/1024, merge=0/0, ticks=637/224, in_queue=861, util=95.19% 00:18:14.371 nvme0n4: ios=1284/1536, merge=0/0, ticks=760/332, in_queue=1092, util=94.51% 00:18:14.371 01:46:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:18:14.371 [global] 00:18:14.371 thread=1 00:18:14.371 invalidate=1 00:18:14.371 rw=randwrite 00:18:14.371 time_based=1 00:18:14.371 runtime=1 00:18:14.371 ioengine=libaio 00:18:14.371 direct=1 00:18:14.371 bs=4096 00:18:14.371 iodepth=1 00:18:14.371 norandommap=0 00:18:14.371 numjobs=1 00:18:14.371 00:18:14.371 verify_dump=1 00:18:14.371 verify_backlog=512 00:18:14.371 verify_state_save=0 00:18:14.371 do_verify=1 00:18:14.371 verify=crc32c-intel 00:18:14.371 [job0] 00:18:14.371 filename=/dev/nvme0n1 00:18:14.371 [job1] 00:18:14.371 filename=/dev/nvme0n2 00:18:14.371 [job2] 00:18:14.371 filename=/dev/nvme0n3 00:18:14.371 [job3] 00:18:14.371 filename=/dev/nvme0n4 00:18:14.371 Could not set queue depth (nvme0n1) 00:18:14.371 Could not set queue depth (nvme0n2) 00:18:14.371 Could not set queue depth (nvme0n3) 00:18:14.371 Could not set queue depth (nvme0n4) 00:18:14.629 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:14.629 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:14.629 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:14.629 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:14.629 fio-3.35 00:18:14.629 Starting 4 threads 00:18:16.006 00:18:16.006 job0: (groupid=0, jobs=1): err= 0: pid=4054370: Wed May 15 01:46:39 2024 00:18:16.006 read: IOPS=59, BW=238KiB/s (244kB/s)(240KiB/1008msec) 00:18:16.006 slat (nsec): min=4668, max=33695, avg=14355.98, stdev=8342.85 00:18:16.006 clat (usec): min=271, max=41367, 
avg=14398.75, stdev=19364.87 00:18:16.006 lat (usec): min=275, max=41375, avg=14413.10, stdev=19369.49 00:18:16.006 clat percentiles (usec): 00:18:16.006 | 1.00th=[ 273], 5.00th=[ 277], 10.00th=[ 281], 20.00th=[ 297], 00:18:16.006 | 30.00th=[ 310], 40.00th=[ 322], 50.00th=[ 375], 60.00th=[ 392], 00:18:16.006 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:16.006 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:16.006 | 99.99th=[41157] 00:18:16.006 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:18:16.006 slat (nsec): min=6131, max=71385, avg=9837.88, stdev=4888.93 00:18:16.006 clat (usec): min=160, max=475, avg=266.81, stdev=56.30 00:18:16.006 lat (usec): min=171, max=492, avg=276.65, stdev=55.97 00:18:16.006 clat percentiles (usec): 00:18:16.006 | 1.00th=[ 172], 5.00th=[ 186], 10.00th=[ 206], 20.00th=[ 235], 00:18:16.006 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 260], 00:18:16.006 | 70.00th=[ 273], 80.00th=[ 302], 90.00th=[ 375], 95.00th=[ 383], 00:18:16.006 | 99.00th=[ 412], 99.50th=[ 420], 99.90th=[ 478], 99.95th=[ 478], 00:18:16.006 | 99.99th=[ 478] 00:18:16.006 bw ( KiB/s): min= 4096, max= 4096, per=25.20%, avg=4096.00, stdev= 0.00, samples=1 00:18:16.006 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:16.006 lat (usec) : 250=43.18%, 500=52.80%, 750=0.35% 00:18:16.006 lat (msec) : 50=3.67% 00:18:16.006 cpu : usr=0.50%, sys=0.30%, ctx=572, majf=0, minf=2 00:18:16.006 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:16.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.006 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.006 issued rwts: total=60,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:16.006 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:16.006 job1: (groupid=0, jobs=1): err= 0: pid=4054371: Wed May 15 01:46:39 2024 00:18:16.006 read: IOPS=1795, BW=7181KiB/s (7353kB/s)(7188KiB/1001msec) 00:18:16.006 slat (nsec): min=4760, max=55844, avg=14556.59, stdev=8820.40 00:18:16.006 clat (usec): min=240, max=591, avg=310.65, stdev=48.28 00:18:16.006 lat (usec): min=245, max=603, avg=325.20, stdev=49.39 00:18:16.006 clat percentiles (usec): 00:18:16.006 | 1.00th=[ 245], 5.00th=[ 253], 10.00th=[ 258], 20.00th=[ 269], 00:18:16.006 | 30.00th=[ 277], 40.00th=[ 285], 50.00th=[ 297], 60.00th=[ 310], 00:18:16.006 | 70.00th=[ 338], 80.00th=[ 359], 90.00th=[ 383], 95.00th=[ 392], 00:18:16.006 | 99.00th=[ 424], 99.50th=[ 449], 99.90th=[ 510], 99.95th=[ 594], 00:18:16.006 | 99.99th=[ 594] 00:18:16.006 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:18:16.006 slat (nsec): min=5883, max=38070, avg=10371.33, stdev=5044.61 00:18:16.006 clat (usec): min=154, max=305, avg=184.77, stdev=21.06 00:18:16.006 lat (usec): min=160, max=315, avg=195.14, stdev=21.88 00:18:16.006 clat percentiles (usec): 00:18:16.006 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 167], 00:18:16.006 | 30.00th=[ 172], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 186], 00:18:16.006 | 70.00th=[ 192], 80.00th=[ 200], 90.00th=[ 212], 95.00th=[ 227], 00:18:16.006 | 99.00th=[ 255], 99.50th=[ 273], 99.90th=[ 289], 99.95th=[ 289], 00:18:16.006 | 99.99th=[ 306] 00:18:16.006 bw ( KiB/s): min= 8192, max= 8192, per=50.40%, avg=8192.00, stdev= 0.00, samples=1 00:18:16.006 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:16.006 lat (usec) : 250=54.23%, 
500=45.70%, 750=0.08% 00:18:16.006 cpu : usr=2.30%, sys=5.10%, ctx=3847, majf=0, minf=1 00:18:16.006 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:16.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.006 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.006 issued rwts: total=1797,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:16.006 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:16.006 job2: (groupid=0, jobs=1): err= 0: pid=4054372: Wed May 15 01:46:39 2024 00:18:16.006 read: IOPS=967, BW=3868KiB/s (3961kB/s)(3872KiB/1001msec) 00:18:16.006 slat (nsec): min=5532, max=50343, avg=16961.07, stdev=10339.52 00:18:16.006 clat (usec): min=237, max=41354, avg=748.68, stdev=4103.96 00:18:16.006 lat (usec): min=258, max=41360, avg=765.64, stdev=4103.66 00:18:16.006 clat percentiles (usec): 00:18:16.006 | 1.00th=[ 260], 5.00th=[ 273], 10.00th=[ 281], 20.00th=[ 289], 00:18:16.006 | 30.00th=[ 302], 40.00th=[ 310], 50.00th=[ 322], 60.00th=[ 334], 00:18:16.006 | 70.00th=[ 343], 80.00th=[ 355], 90.00th=[ 379], 95.00th=[ 469], 00:18:16.006 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:16.006 | 99.99th=[41157] 00:18:16.006 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:18:16.006 slat (nsec): min=6348, max=61326, avg=14250.87, stdev=5215.69 00:18:16.006 clat (usec): min=159, max=480, avg=230.12, stdev=55.31 00:18:16.006 lat (usec): min=175, max=489, avg=244.37, stdev=53.19 00:18:16.006 clat percentiles (usec): 00:18:16.006 | 1.00th=[ 174], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 190], 00:18:16.006 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 208], 60.00th=[ 225], 00:18:16.006 | 70.00th=[ 245], 80.00th=[ 260], 90.00th=[ 306], 95.00th=[ 379], 00:18:16.006 | 99.00th=[ 416], 99.50th=[ 433], 99.90th=[ 449], 99.95th=[ 482], 00:18:16.006 | 99.99th=[ 482] 00:18:16.006 bw ( KiB/s): min= 4096, max= 4096, per=25.20%, avg=4096.00, stdev= 0.00, samples=1 00:18:16.006 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:16.006 lat (usec) : 250=38.10%, 500=60.69%, 750=0.65%, 1000=0.05% 00:18:16.006 lat (msec) : 50=0.50% 00:18:16.006 cpu : usr=1.90%, sys=3.10%, ctx=1995, majf=0, minf=1 00:18:16.006 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:16.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.006 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.006 issued rwts: total=968,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:16.006 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:16.006 job3: (groupid=0, jobs=1): err= 0: pid=4054373: Wed May 15 01:46:39 2024 00:18:16.006 read: IOPS=502, BW=2012KiB/s (2060kB/s)(2016KiB/1002msec) 00:18:16.006 slat (nsec): min=4875, max=35376, avg=11224.08, stdev=5461.60 00:18:16.006 clat (usec): min=216, max=41935, avg=1754.29, stdev=7318.75 00:18:16.006 lat (usec): min=222, max=41970, avg=1765.51, stdev=7320.91 00:18:16.006 clat percentiles (usec): 00:18:16.006 | 1.00th=[ 235], 5.00th=[ 269], 10.00th=[ 289], 20.00th=[ 302], 00:18:16.006 | 30.00th=[ 322], 40.00th=[ 338], 50.00th=[ 359], 60.00th=[ 379], 00:18:16.006 | 70.00th=[ 383], 80.00th=[ 408], 90.00th=[ 465], 95.00th=[ 486], 00:18:16.006 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:18:16.006 | 99.99th=[41681] 00:18:16.006 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:18:16.006 
slat (nsec): min=6655, max=22925, avg=8035.87, stdev=2185.49 00:18:16.006 clat (usec): min=156, max=842, avg=197.84, stdev=45.57 00:18:16.006 lat (usec): min=163, max=849, avg=205.87, stdev=45.86 00:18:16.006 clat percentiles (usec): 00:18:16.006 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 178], 00:18:16.006 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 196], 00:18:16.006 | 70.00th=[ 200], 80.00th=[ 208], 90.00th=[ 225], 95.00th=[ 247], 00:18:16.006 | 99.00th=[ 293], 99.50th=[ 400], 99.90th=[ 840], 99.95th=[ 840], 00:18:16.006 | 99.99th=[ 840] 00:18:16.006 bw ( KiB/s): min= 4096, max= 4096, per=25.20%, avg=4096.00, stdev= 0.00, samples=1 00:18:16.006 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:16.006 lat (usec) : 250=48.92%, 500=48.62%, 750=0.59%, 1000=0.10% 00:18:16.006 lat (msec) : 50=1.77% 00:18:16.006 cpu : usr=0.50%, sys=1.00%, ctx=1018, majf=0, minf=1 00:18:16.006 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:16.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.006 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.006 issued rwts: total=504,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:16.006 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:16.006 00:18:16.006 Run status group 0 (all jobs): 00:18:16.006 READ: bw=12.9MiB/s (13.5MB/s), 238KiB/s-7181KiB/s (244kB/s-7353kB/s), io=13.0MiB (13.6MB), run=1001-1008msec 00:18:16.006 WRITE: bw=15.9MiB/s (16.6MB/s), 2032KiB/s-8184KiB/s (2081kB/s-8380kB/s), io=16.0MiB (16.8MB), run=1001-1008msec 00:18:16.006 00:18:16.006 Disk stats (read/write): 00:18:16.006 nvme0n1: ios=105/512, merge=0/0, ticks=696/134, in_queue=830, util=86.37% 00:18:16.006 nvme0n2: ios=1581/1722, merge=0/0, ticks=661/311, in_queue=972, util=100.00% 00:18:16.006 nvme0n3: ios=669/1024, merge=0/0, ticks=1448/234, in_queue=1682, util=93.29% 00:18:16.006 nvme0n4: ios=562/512, merge=0/0, ticks=1431/98, in_queue=1529, util=98.31% 00:18:16.006 01:46:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:18:16.006 [global] 00:18:16.006 thread=1 00:18:16.006 invalidate=1 00:18:16.006 rw=write 00:18:16.006 time_based=1 00:18:16.006 runtime=1 00:18:16.006 ioengine=libaio 00:18:16.006 direct=1 00:18:16.006 bs=4096 00:18:16.006 iodepth=128 00:18:16.006 norandommap=0 00:18:16.006 numjobs=1 00:18:16.006 00:18:16.006 verify_dump=1 00:18:16.006 verify_backlog=512 00:18:16.007 verify_state_save=0 00:18:16.007 do_verify=1 00:18:16.007 verify=crc32c-intel 00:18:16.007 [job0] 00:18:16.007 filename=/dev/nvme0n1 00:18:16.007 [job1] 00:18:16.007 filename=/dev/nvme0n2 00:18:16.007 [job2] 00:18:16.007 filename=/dev/nvme0n3 00:18:16.007 [job3] 00:18:16.007 filename=/dev/nvme0n4 00:18:16.007 Could not set queue depth (nvme0n1) 00:18:16.007 Could not set queue depth (nvme0n2) 00:18:16.007 Could not set queue depth (nvme0n3) 00:18:16.007 Could not set queue depth (nvme0n4) 00:18:16.007 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:16.007 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:16.007 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:16.007 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:18:16.007 fio-3.35 00:18:16.007 Starting 4 threads 00:18:17.382 00:18:17.382 job0: (groupid=0, jobs=1): err= 0: pid=4054608: Wed May 15 01:46:41 2024 00:18:17.382 read: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec) 00:18:17.382 slat (usec): min=3, max=34364, avg=131.46, stdev=1201.37 00:18:17.382 clat (usec): min=6566, max=84628, avg=16649.52, stdev=13139.98 00:18:17.382 lat (usec): min=6575, max=84665, avg=16780.98, stdev=13265.93 00:18:17.382 clat percentiles (usec): 00:18:17.382 | 1.00th=[ 7308], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9765], 00:18:17.382 | 30.00th=[10290], 40.00th=[10814], 50.00th=[11338], 60.00th=[12125], 00:18:17.382 | 70.00th=[12911], 80.00th=[17433], 90.00th=[40109], 95.00th=[50070], 00:18:17.382 | 99.00th=[64750], 99.50th=[65274], 99.90th=[65274], 99.95th=[83362], 00:18:17.382 | 99.99th=[84411] 00:18:17.382 write: IOPS=3971, BW=15.5MiB/s (16.3MB/s)(15.6MiB/1007msec); 0 zone resets 00:18:17.382 slat (usec): min=4, max=10807, avg=122.82, stdev=624.44 00:18:17.382 clat (usec): min=4790, max=67428, avg=16901.47, stdev=12544.27 00:18:17.382 lat (usec): min=5240, max=67437, avg=17024.29, stdev=12611.48 00:18:17.382 clat percentiles (usec): 00:18:17.382 | 1.00th=[ 6390], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[ 9503], 00:18:17.382 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10421], 60.00th=[12911], 00:18:17.382 | 70.00th=[20841], 80.00th=[22152], 90.00th=[24773], 95.00th=[51119], 00:18:17.382 | 99.00th=[64226], 99.50th=[64226], 99.90th=[67634], 99.95th=[67634], 00:18:17.382 | 99.99th=[67634] 00:18:17.382 bw ( KiB/s): min=10488, max=20480, per=30.84%, avg=15484.00, stdev=7065.41, samples=2 00:18:17.382 iops : min= 2622, max= 5120, avg=3871.00, stdev=1766.35, samples=2 00:18:17.382 lat (msec) : 10=31.14%, 20=43.83%, 50=19.82%, 100=5.21% 00:18:17.382 cpu : usr=5.37%, sys=7.55%, ctx=370, majf=0, minf=17 00:18:17.382 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:17.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:17.382 issued rwts: total=3584,3999,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:17.382 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:17.382 job1: (groupid=0, jobs=1): err= 0: pid=4054609: Wed May 15 01:46:41 2024 00:18:17.382 read: IOPS=2335, BW=9344KiB/s (9568kB/s)(9456KiB/1012msec) 00:18:17.382 slat (usec): min=2, max=34451, avg=188.53, stdev=1471.91 00:18:17.382 clat (usec): min=449, max=65140, avg=21256.23, stdev=13420.35 00:18:17.382 lat (usec): min=5148, max=65151, avg=21444.76, stdev=13489.77 00:18:17.382 clat percentiles (usec): 00:18:17.382 | 1.00th=[ 6456], 5.00th=[ 9765], 10.00th=[10290], 20.00th=[10421], 00:18:17.382 | 30.00th=[11076], 40.00th=[13042], 50.00th=[16581], 60.00th=[20841], 00:18:17.382 | 70.00th=[27132], 80.00th=[29754], 90.00th=[40633], 95.00th=[52167], 00:18:17.382 | 99.00th=[62129], 99.50th=[63177], 99.90th=[65274], 99.95th=[65274], 00:18:17.382 | 99.99th=[65274] 00:18:17.382 write: IOPS=2529, BW=9.88MiB/s (10.4MB/s)(10.0MiB/1012msec); 0 zone resets 00:18:17.382 slat (usec): min=3, max=41527, avg=211.97, stdev=1377.64 00:18:17.382 clat (usec): min=1471, max=93092, avg=30481.68, stdev=15765.37 00:18:17.382 lat (usec): min=1479, max=93104, avg=30693.64, stdev=15850.51 00:18:17.382 clat percentiles (usec): 00:18:17.382 | 1.00th=[ 4228], 5.00th=[ 8717], 10.00th=[19530], 20.00th=[20579], 00:18:17.382 | 30.00th=[20841], 40.00th=[21627], 
50.00th=[28181], 60.00th=[30278], 00:18:17.382 | 70.00th=[31589], 80.00th=[40633], 90.00th=[53216], 95.00th=[57934], 00:18:17.382 | 99.00th=[87557], 99.50th=[91751], 99.90th=[92799], 99.95th=[92799], 00:18:17.382 | 99.99th=[92799] 00:18:17.382 bw ( KiB/s): min= 8496, max=11984, per=20.39%, avg=10240.00, stdev=2466.39, samples=2 00:18:17.382 iops : min= 2124, max= 2996, avg=2560.00, stdev=616.60, samples=2 00:18:17.382 lat (usec) : 500=0.02% 00:18:17.382 lat (msec) : 2=0.06%, 4=0.41%, 10=5.93%, 20=27.50%, 50=57.21% 00:18:17.382 lat (msec) : 100=8.87% 00:18:17.382 cpu : usr=2.27%, sys=2.87%, ctx=303, majf=0, minf=7 00:18:17.382 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:18:17.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:17.382 issued rwts: total=2364,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:17.382 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:17.382 job2: (groupid=0, jobs=1): err= 0: pid=4054610: Wed May 15 01:46:41 2024 00:18:17.382 read: IOPS=3051, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:18:17.382 slat (usec): min=3, max=16703, avg=141.22, stdev=1037.24 00:18:17.382 clat (usec): min=2393, max=54966, avg=18656.02, stdev=10070.86 00:18:17.382 lat (usec): min=2412, max=54989, avg=18797.24, stdev=10147.11 00:18:17.382 clat percentiles (usec): 00:18:17.382 | 1.00th=[ 4621], 5.00th=[ 8455], 10.00th=[10814], 20.00th=[13042], 00:18:17.382 | 30.00th=[13173], 40.00th=[13698], 50.00th=[14222], 60.00th=[15008], 00:18:17.382 | 70.00th=[19530], 80.00th=[27395], 90.00th=[30540], 95.00th=[42206], 00:18:17.382 | 99.00th=[49546], 99.50th=[51643], 99.90th=[54789], 99.95th=[54789], 00:18:17.382 | 99.99th=[54789] 00:18:17.382 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:18:17.382 slat (usec): min=4, max=22420, avg=162.42, stdev=944.35 00:18:17.382 clat (usec): min=2789, max=52462, avg=22856.29, stdev=8281.19 00:18:17.382 lat (usec): min=2798, max=52509, avg=23018.71, stdev=8376.04 00:18:17.382 clat percentiles (usec): 00:18:17.382 | 1.00th=[ 5342], 5.00th=[10552], 10.00th=[11863], 20.00th=[13566], 00:18:17.382 | 30.00th=[20055], 40.00th=[21890], 50.00th=[22152], 60.00th=[22938], 00:18:17.382 | 70.00th=[29754], 80.00th=[30278], 90.00th=[31589], 95.00th=[33162], 00:18:17.382 | 99.00th=[44827], 99.50th=[45876], 99.90th=[45876], 99.95th=[52167], 00:18:17.382 | 99.99th=[52691] 00:18:17.382 bw ( KiB/s): min=12288, max=12288, per=24.47%, avg=12288.00, stdev= 0.00, samples=2 00:18:17.382 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:18:17.382 lat (msec) : 4=0.60%, 10=5.66%, 20=44.00%, 50=49.25%, 100=0.49% 00:18:17.382 cpu : usr=4.69%, sys=5.88%, ctx=339, majf=0, minf=13 00:18:17.382 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:17.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:17.383 issued rwts: total=3064,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:17.383 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:17.383 job3: (groupid=0, jobs=1): err= 0: pid=4054611: Wed May 15 01:46:41 2024 00:18:17.383 read: IOPS=3005, BW=11.7MiB/s (12.3MB/s)(11.8MiB/1006msec) 00:18:17.383 slat (usec): min=3, max=22462, avg=174.16, stdev=1209.42 00:18:17.383 clat (usec): min=3351, max=62442, avg=19258.18, stdev=11407.29 00:18:17.383 
lat (usec): min=5592, max=62449, avg=19432.35, stdev=11519.92 00:18:17.383 clat percentiles (usec): 00:18:17.383 | 1.00th=[ 7767], 5.00th=[ 8717], 10.00th=[ 9896], 20.00th=[11076], 00:18:17.383 | 30.00th=[11338], 40.00th=[11600], 50.00th=[13566], 60.00th=[17433], 00:18:17.383 | 70.00th=[22152], 80.00th=[28705], 90.00th=[35390], 95.00th=[41157], 00:18:17.383 | 99.00th=[60031], 99.50th=[61080], 99.90th=[62653], 99.95th=[62653], 00:18:17.383 | 99.99th=[62653] 00:18:17.383 write: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec); 0 zone resets 00:18:17.383 slat (usec): min=4, max=24763, avg=144.46, stdev=949.21 00:18:17.383 clat (usec): min=3454, max=62444, avg=22548.99, stdev=10231.17 00:18:17.383 lat (usec): min=3463, max=62456, avg=22693.45, stdev=10308.95 00:18:17.383 clat percentiles (usec): 00:18:17.383 | 1.00th=[ 5538], 5.00th=[ 9765], 10.00th=[10290], 20.00th=[11207], 00:18:17.383 | 30.00th=[19006], 40.00th=[20317], 50.00th=[20841], 60.00th=[21627], 00:18:17.383 | 70.00th=[29754], 80.00th=[30278], 90.00th=[31589], 95.00th=[39584], 00:18:17.383 | 99.00th=[53216], 99.50th=[53216], 99.90th=[61604], 99.95th=[62653], 00:18:17.383 | 99.99th=[62653] 00:18:17.383 bw ( KiB/s): min=12136, max=12440, per=24.47%, avg=12288.00, stdev=214.96, samples=2 00:18:17.383 iops : min= 3034, max= 3110, avg=3072.00, stdev=53.74, samples=2 00:18:17.383 lat (msec) : 4=0.11%, 10=9.58%, 20=40.14%, 50=47.03%, 100=3.13% 00:18:17.383 cpu : usr=4.18%, sys=6.47%, ctx=339, majf=0, minf=13 00:18:17.383 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:17.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:17.383 issued rwts: total=3024,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:17.383 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:17.383 00:18:17.383 Run status group 0 (all jobs): 00:18:17.383 READ: bw=46.5MiB/s (48.7MB/s), 9344KiB/s-13.9MiB/s (9568kB/s-14.6MB/s), io=47.0MiB (49.3MB), run=1004-1012msec 00:18:17.383 WRITE: bw=49.0MiB/s (51.4MB/s), 9.88MiB/s-15.5MiB/s (10.4MB/s-16.3MB/s), io=49.6MiB (52.0MB), run=1004-1012msec 00:18:17.383 00:18:17.383 Disk stats (read/write): 00:18:17.383 nvme0n1: ios=3113/3584, merge=0/0, ticks=26349/24802, in_queue=51151, util=98.10% 00:18:17.383 nvme0n2: ios=2090/2119, merge=0/0, ticks=39446/63819, in_queue=103265, util=97.76% 00:18:17.383 nvme0n3: ios=2618/2639, merge=0/0, ticks=48066/55992, in_queue=104058, util=98.44% 00:18:17.383 nvme0n4: ios=2300/2560, merge=0/0, ticks=48239/57181, in_queue=105420, util=98.53% 00:18:17.383 01:46:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:18:17.383 [global] 00:18:17.383 thread=1 00:18:17.383 invalidate=1 00:18:17.383 rw=randwrite 00:18:17.383 time_based=1 00:18:17.383 runtime=1 00:18:17.383 ioengine=libaio 00:18:17.383 direct=1 00:18:17.383 bs=4096 00:18:17.383 iodepth=128 00:18:17.383 norandommap=0 00:18:17.383 numjobs=1 00:18:17.383 00:18:17.383 verify_dump=1 00:18:17.383 verify_backlog=512 00:18:17.383 verify_state_save=0 00:18:17.383 do_verify=1 00:18:17.383 verify=crc32c-intel 00:18:17.383 [job0] 00:18:17.383 filename=/dev/nvme0n1 00:18:17.383 [job1] 00:18:17.383 filename=/dev/nvme0n2 00:18:17.383 [job2] 00:18:17.383 filename=/dev/nvme0n3 00:18:17.383 [job3] 00:18:17.383 filename=/dev/nvme0n4 00:18:17.383 Could not set queue depth (nvme0n1) 
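For reference, the target configuration these fio runs exercise was built entirely over JSON-RPC earlier in the trace. Collected into one plain sequence — rpc.py is shortened here from its full workspace path, the hostnqn/hostid UUIDs of the connect call are elided, and the loops are a compaction of the individual traced calls:

rpc.py nvmf_create_transport -t tcp -o -u 8192
for i in 0 1 2 3 4 5 6; do
  rpc.py bdev_malloc_create 64 512                  # creates Malloc0 .. Malloc6
done
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'          # striped
rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"   # four namespaces
done
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

The four namespaces surface on the initiator as /dev/nvme0n1..nvme0n4, the block devices every fio job file in this trace targets.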
00:18:17.383 Could not set queue depth (nvme0n2) 00:18:17.383 Could not set queue depth (nvme0n3) 00:18:17.383 Could not set queue depth (nvme0n4) 00:18:17.383 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:17.383 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:17.383 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:17.383 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:17.383 fio-3.35 00:18:17.383 Starting 4 threads 00:18:18.760 00:18:18.760 job0: (groupid=0, jobs=1): err= 0: pid=4054837: Wed May 15 01:46:42 2024 00:18:18.760 read: IOPS=2496, BW=9985KiB/s (10.2MB/s)(10.2MiB/1050msec) 00:18:18.760 slat (usec): min=2, max=18165, avg=136.23, stdev=924.95 00:18:18.760 clat (usec): min=5342, max=68350, avg=17591.67, stdev=10582.31 00:18:18.760 lat (usec): min=5345, max=68355, avg=17727.90, stdev=10658.20 00:18:18.760 clat percentiles (usec): 00:18:18.760 | 1.00th=[ 8455], 5.00th=[ 9241], 10.00th=[10028], 20.00th=[12780], 00:18:18.760 | 30.00th=[13042], 40.00th=[13829], 50.00th=[14615], 60.00th=[15008], 00:18:18.760 | 70.00th=[15664], 80.00th=[18220], 90.00th=[27395], 95.00th=[45876], 00:18:18.760 | 99.00th=[63701], 99.50th=[65274], 99.90th=[68682], 99.95th=[68682], 00:18:18.760 | 99.99th=[68682] 00:18:18.760 write: IOPS=2925, BW=11.4MiB/s (12.0MB/s)(12.0MiB/1050msec); 0 zone resets 00:18:18.760 slat (usec): min=3, max=44007, avg=190.07, stdev=1712.06 00:18:18.760 clat (usec): min=409, max=98916, avg=27585.61, stdev=17680.22 00:18:18.760 lat (usec): min=428, max=98933, avg=27775.68, stdev=17824.22 00:18:18.760 clat percentiles (usec): 00:18:18.760 | 1.00th=[ 4359], 5.00th=[11076], 10.00th=[12256], 20.00th=[14091], 00:18:18.760 | 30.00th=[15270], 40.00th=[16712], 50.00th=[22938], 60.00th=[24511], 00:18:18.760 | 70.00th=[27395], 80.00th=[47973], 90.00th=[55313], 95.00th=[64750], 00:18:18.760 | 99.00th=[70779], 99.50th=[70779], 99.90th=[71828], 99.95th=[92799], 00:18:18.760 | 99.99th=[99091] 00:18:18.760 bw ( KiB/s): min=10744, max=13296, per=20.20%, avg=12020.00, stdev=1804.54, samples=2 00:18:18.760 iops : min= 2686, max= 3324, avg=3005.00, stdev=451.13, samples=2 00:18:18.760 lat (usec) : 500=0.04% 00:18:18.760 lat (msec) : 2=0.12%, 4=0.18%, 10=6.36%, 20=57.58%, 50=24.93% 00:18:18.760 lat (msec) : 100=10.80% 00:18:18.760 cpu : usr=2.48%, sys=4.00%, ctx=240, majf=0, minf=13 00:18:18.760 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:18:18.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.760 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:18.760 issued rwts: total=2621,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:18.760 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:18.761 job1: (groupid=0, jobs=1): err= 0: pid=4054838: Wed May 15 01:46:42 2024 00:18:18.761 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:18:18.761 slat (usec): min=2, max=24237, avg=100.88, stdev=680.73 00:18:18.761 clat (usec): min=2043, max=44747, avg=13065.83, stdev=5113.15 00:18:18.761 lat (usec): min=2201, max=44790, avg=13166.70, stdev=5162.76 00:18:18.761 clat percentiles (usec): 00:18:18.761 | 1.00th=[ 5276], 5.00th=[ 9110], 10.00th=[ 9896], 20.00th=[10552], 00:18:18.761 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11076], 
60.00th=[11338], 00:18:18.761 | 70.00th=[11994], 80.00th=[13435], 90.00th=[20579], 95.00th=[24511], 00:18:18.761 | 99.00th=[32113], 99.50th=[32113], 99.90th=[32113], 99.95th=[34866], 00:18:18.761 | 99.99th=[44827] 00:18:18.761 write: IOPS=4841, BW=18.9MiB/s (19.8MB/s)(19.0MiB/1005msec); 0 zone resets 00:18:18.761 slat (usec): min=3, max=20051, avg=100.80, stdev=675.63 00:18:18.761 clat (usec): min=1002, max=48094, avg=13825.32, stdev=6548.48 00:18:18.761 lat (usec): min=1014, max=48206, avg=13926.12, stdev=6614.23 00:18:18.761 clat percentiles (usec): 00:18:18.761 | 1.00th=[ 4359], 5.00th=[ 9896], 10.00th=[10421], 20.00th=[10683], 00:18:18.761 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11076], 60.00th=[11338], 00:18:18.761 | 70.00th=[11469], 80.00th=[16057], 90.00th=[22938], 95.00th=[31327], 00:18:18.761 | 99.00th=[36963], 99.50th=[37487], 99.90th=[40109], 99.95th=[40633], 00:18:18.761 | 99.99th=[47973] 00:18:18.761 bw ( KiB/s): min=18056, max=19856, per=31.86%, avg=18956.00, stdev=1272.79, samples=2 00:18:18.761 iops : min= 4514, max= 4964, avg=4739.00, stdev=318.20, samples=2 00:18:18.761 lat (msec) : 2=0.16%, 4=0.44%, 10=7.42%, 20=77.54%, 50=14.44% 00:18:18.761 cpu : usr=4.18%, sys=7.67%, ctx=577, majf=0, minf=11 00:18:18.761 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:18:18.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.761 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:18.761 issued rwts: total=4608,4866,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:18.761 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:18.761 job2: (groupid=0, jobs=1): err= 0: pid=4054851: Wed May 15 01:46:42 2024 00:18:18.761 read: IOPS=2277, BW=9109KiB/s (9328kB/s)(9200KiB/1010msec) 00:18:18.761 slat (usec): min=2, max=20399, avg=186.01, stdev=1255.12 00:18:18.761 clat (usec): min=3459, max=57675, avg=21276.26, stdev=8749.54 00:18:18.761 lat (usec): min=6350, max=57683, avg=21462.27, stdev=8849.49 00:18:18.761 clat percentiles (usec): 00:18:18.761 | 1.00th=[ 8848], 5.00th=[11207], 10.00th=[11338], 20.00th=[13566], 00:18:18.761 | 30.00th=[14615], 40.00th=[17171], 50.00th=[20841], 60.00th=[23725], 00:18:18.761 | 70.00th=[24773], 80.00th=[28705], 90.00th=[33162], 95.00th=[36963], 00:18:18.761 | 99.00th=[47449], 99.50th=[52167], 99.90th=[57410], 99.95th=[57934], 00:18:18.761 | 99.99th=[57934] 00:18:18.761 write: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec); 0 zone resets 00:18:18.761 slat (usec): min=3, max=21828, avg=218.05, stdev=1108.77 00:18:18.761 clat (usec): min=2633, max=82794, avg=30911.40, stdev=16299.18 00:18:18.761 lat (usec): min=2640, max=82803, avg=31129.45, stdev=16405.34 00:18:18.761 clat percentiles (usec): 00:18:18.761 | 1.00th=[ 4752], 5.00th=[10421], 10.00th=[11338], 20.00th=[20317], 00:18:18.761 | 30.00th=[22938], 40.00th=[24249], 50.00th=[24511], 60.00th=[25297], 00:18:18.761 | 70.00th=[36963], 80.00th=[47449], 90.00th=[55313], 95.00th=[58983], 00:18:18.761 | 99.00th=[78119], 99.50th=[81265], 99.90th=[82314], 99.95th=[82314], 00:18:18.761 | 99.99th=[82314] 00:18:18.761 bw ( KiB/s): min=10192, max=10288, per=17.21%, avg=10240.00, stdev=67.88, samples=2 00:18:18.761 iops : min= 2548, max= 2572, avg=2560.00, stdev=16.97, samples=2 00:18:18.761 lat (msec) : 4=0.43%, 10=2.84%, 20=28.99%, 50=58.02%, 100=9.71% 00:18:18.761 cpu : usr=2.08%, sys=4.56%, ctx=318, majf=0, minf=13 00:18:18.761 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:18:18.761 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.761 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:18.761 issued rwts: total=2300,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:18.761 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:18.761 job3: (groupid=0, jobs=1): err= 0: pid=4054857: Wed May 15 01:46:42 2024 00:18:18.761 read: IOPS=4612, BW=18.0MiB/s (18.9MB/s)(18.1MiB/1004msec) 00:18:18.761 slat (usec): min=2, max=6917, avg=102.58, stdev=575.03 00:18:18.761 clat (usec): min=3863, max=26345, avg=13038.88, stdev=2129.67 00:18:18.761 lat (usec): min=3868, max=26357, avg=13141.46, stdev=2167.84 00:18:18.761 clat percentiles (usec): 00:18:18.761 | 1.00th=[ 9110], 5.00th=[10028], 10.00th=[10945], 20.00th=[11994], 00:18:18.761 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12649], 60.00th=[12911], 00:18:18.761 | 70.00th=[13173], 80.00th=[13960], 90.00th=[15533], 95.00th=[18220], 00:18:18.761 | 99.00th=[20317], 99.50th=[20317], 99.90th=[24249], 99.95th=[25035], 00:18:18.761 | 99.99th=[26346] 00:18:18.761 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:18:18.761 slat (usec): min=3, max=6621, avg=96.90, stdev=520.10 00:18:18.761 clat (usec): min=7093, max=26112, avg=12998.16, stdev=2168.75 00:18:18.761 lat (usec): min=8014, max=26144, avg=13095.06, stdev=2208.42 00:18:18.761 clat percentiles (usec): 00:18:18.761 | 1.00th=[ 8356], 5.00th=[ 9896], 10.00th=[10945], 20.00th=[12125], 00:18:18.761 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12649], 60.00th=[12911], 00:18:18.761 | 70.00th=[13042], 80.00th=[13698], 90.00th=[14877], 95.00th=[19006], 00:18:18.761 | 99.00th=[19792], 99.50th=[19792], 99.90th=[25297], 99.95th=[25822], 00:18:18.761 | 99.99th=[26084] 00:18:18.761 bw ( KiB/s): min=19648, max=20480, per=33.72%, avg=20064.00, stdev=588.31, samples=2 00:18:18.761 iops : min= 4912, max= 5120, avg=5016.00, stdev=147.08, samples=2 00:18:18.761 lat (msec) : 4=0.14%, 10=4.72%, 20=94.57%, 50=0.56% 00:18:18.761 cpu : usr=3.29%, sys=5.28%, ctx=467, majf=0, minf=15 00:18:18.761 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:18:18.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.761 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:18.761 issued rwts: total=4631,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:18.761 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:18.761 00:18:18.761 Run status group 0 (all jobs): 00:18:18.761 READ: bw=52.7MiB/s (55.2MB/s), 9109KiB/s-18.0MiB/s (9328kB/s-18.9MB/s), io=55.3MiB (58.0MB), run=1004-1050msec 00:18:18.761 WRITE: bw=58.1MiB/s (60.9MB/s), 9.90MiB/s-19.9MiB/s (10.4MB/s-20.9MB/s), io=61.0MiB (64.0MB), run=1004-1050msec 00:18:18.761 00:18:18.761 Disk stats (read/write): 00:18:18.761 nvme0n1: ios=2072/2511, merge=0/0, ticks=22444/42232, in_queue=64676, util=96.69% 00:18:18.761 nvme0n2: ios=4109/4474, merge=0/0, ticks=24142/26753, in_queue=50895, util=86.38% 00:18:18.761 nvme0n3: ios=1779/2048, merge=0/0, ticks=38574/66258, in_queue=104832, util=88.81% 00:18:18.761 nvme0n4: ios=4143/4398, merge=0/0, ticks=16767/17060, in_queue=33827, util=97.15% 00:18:18.761 01:46:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:18:18.761 01:46:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=4055055 00:18:18.761 01:46:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 
1 -t read -r 10 00:18:18.761 01:46:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:18:18.761 [global] 00:18:18.761 thread=1 00:18:18.761 invalidate=1 00:18:18.761 rw=read 00:18:18.761 time_based=1 00:18:18.761 runtime=10 00:18:18.761 ioengine=libaio 00:18:18.761 direct=1 00:18:18.761 bs=4096 00:18:18.761 iodepth=1 00:18:18.761 norandommap=1 00:18:18.761 numjobs=1 00:18:18.761 00:18:18.761 [job0] 00:18:18.761 filename=/dev/nvme0n1 00:18:18.761 [job1] 00:18:18.761 filename=/dev/nvme0n2 00:18:18.761 [job2] 00:18:18.761 filename=/dev/nvme0n3 00:18:18.761 [job3] 00:18:18.761 filename=/dev/nvme0n4 00:18:18.761 Could not set queue depth (nvme0n1) 00:18:18.761 Could not set queue depth (nvme0n2) 00:18:18.761 Could not set queue depth (nvme0n3) 00:18:18.761 Could not set queue depth (nvme0n4) 00:18:19.020 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:19.020 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:19.020 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:19.020 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:19.020 fio-3.35 00:18:19.020 Starting 4 threads 00:18:22.301 01:46:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:18:22.301 01:46:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:18:22.301 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=520192, buflen=4096 00:18:22.301 fio: pid=4055191, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:22.301 01:46:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:22.301 01:46:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:18:22.301 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=9945088, buflen=4096 00:18:22.301 fio: pid=4055190, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:22.559 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=55320576, buflen=4096 00:18:22.559 fio: pid=4055183, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:22.559 01:46:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:22.559 01:46:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:18:22.818 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=3334144, buflen=4096 00:18:22.818 fio: pid=4055189, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:22.818 01:46:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:22.818 01:46:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:18:22.818 00:18:22.818 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4055183: Wed May 15 01:46:46 2024 00:18:22.818 read: 
IOPS=3985, BW=15.6MiB/s (16.3MB/s)(52.8MiB/3389msec) 00:18:22.818 slat (usec): min=5, max=10718, avg=12.57, stdev=157.28 00:18:22.818 clat (usec): min=189, max=1421, avg=234.85, stdev=33.18 00:18:22.818 lat (usec): min=195, max=11020, avg=247.42, stdev=162.32 00:18:22.818 clat percentiles (usec): 00:18:22.818 | 1.00th=[ 198], 5.00th=[ 204], 10.00th=[ 206], 20.00th=[ 212], 00:18:22.818 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 239], 00:18:22.818 | 70.00th=[ 249], 80.00th=[ 258], 90.00th=[ 269], 95.00th=[ 277], 00:18:22.818 | 99.00th=[ 310], 99.50th=[ 367], 99.90th=[ 545], 99.95th=[ 553], 00:18:22.818 | 99.99th=[ 1287] 00:18:22.818 bw ( KiB/s): min=14040, max=18016, per=86.78%, avg=16097.33, stdev=1400.17, samples=6 00:18:22.818 iops : min= 3510, max= 4504, avg=4024.33, stdev=350.04, samples=6 00:18:22.818 lat (usec) : 250=70.87%, 500=28.96%, 750=0.14% 00:18:22.818 lat (msec) : 2=0.02% 00:18:22.818 cpu : usr=2.72%, sys=6.14%, ctx=13511, majf=0, minf=1 00:18:22.818 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:22.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.818 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.818 issued rwts: total=13507,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:22.818 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:22.818 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4055189: Wed May 15 01:46:46 2024 00:18:22.818 read: IOPS=223, BW=895KiB/s (916kB/s)(3256KiB/3639msec) 00:18:22.818 slat (usec): min=5, max=8509, avg=26.46, stdev=315.32 00:18:22.818 clat (usec): min=215, max=43949, avg=4440.67, stdev=12399.38 00:18:22.818 lat (usec): min=221, max=50573, avg=4467.12, stdev=12452.00 00:18:22.818 clat percentiles (usec): 00:18:22.818 | 1.00th=[ 229], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 247], 00:18:22.818 | 30.00th=[ 253], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 269], 00:18:22.818 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[40633], 95.00th=[41157], 00:18:22.818 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:18:22.818 | 99.99th=[43779] 00:18:22.818 bw ( KiB/s): min= 96, max= 4344, per=4.82%, avg=894.14, stdev=1576.94, samples=7 00:18:22.818 iops : min= 24, max= 1086, avg=223.43, stdev=394.21, samples=7 00:18:22.818 lat (usec) : 250=25.89%, 500=63.44%, 750=0.12%, 1000=0.25% 00:18:22.818 lat (msec) : 50=10.18% 00:18:22.818 cpu : usr=0.33%, sys=0.19%, ctx=820, majf=0, minf=1 00:18:22.818 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:22.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.818 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.818 issued rwts: total=815,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:22.818 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:22.818 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4055190: Wed May 15 01:46:46 2024 00:18:22.818 read: IOPS=769, BW=3077KiB/s (3151kB/s)(9712KiB/3156msec) 00:18:22.818 slat (nsec): min=5476, max=49093, avg=13533.60, stdev=5826.34 00:18:22.818 clat (usec): min=224, max=41982, avg=1282.85, stdev=6280.13 00:18:22.818 lat (usec): min=233, max=42001, avg=1296.39, stdev=6281.19 00:18:22.818 clat percentiles (usec): 00:18:22.818 | 1.00th=[ 245], 5.00th=[ 253], 10.00th=[ 260], 20.00th=[ 273], 00:18:22.818 | 30.00th=[ 281], 40.00th=[ 285], 
50.00th=[ 289], 60.00th=[ 293], 00:18:22.818 | 70.00th=[ 297], 80.00th=[ 306], 90.00th=[ 314], 95.00th=[ 322], 00:18:22.818 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:18:22.818 | 99.99th=[42206] 00:18:22.819 bw ( KiB/s): min= 96, max=12576, per=17.42%, avg=3232.00, stdev=5194.41, samples=6 00:18:22.819 iops : min= 24, max= 3144, avg=808.00, stdev=1298.60, samples=6 00:18:22.819 lat (usec) : 250=3.46%, 500=93.95%, 750=0.04%, 1000=0.04% 00:18:22.819 lat (msec) : 20=0.04%, 50=2.43% 00:18:22.819 cpu : usr=0.57%, sys=1.65%, ctx=2431, majf=0, minf=1 00:18:22.819 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:22.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.819 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.819 issued rwts: total=2429,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:22.819 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:22.819 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4055191: Wed May 15 01:46:46 2024 00:18:22.819 read: IOPS=44, BW=176KiB/s (180kB/s)(508KiB/2894msec) 00:18:22.819 slat (nsec): min=7834, max=38197, avg=19405.51, stdev=9950.48 00:18:22.819 clat (usec): min=271, max=42287, avg=22758.55, stdev=20266.21 00:18:22.819 lat (usec): min=304, max=42305, avg=22777.99, stdev=20268.68 00:18:22.819 clat percentiles (usec): 00:18:22.819 | 1.00th=[ 277], 5.00th=[ 297], 10.00th=[ 371], 20.00th=[ 392], 00:18:22.819 | 30.00th=[ 408], 40.00th=[ 453], 50.00th=[40633], 60.00th=[41157], 00:18:22.819 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:22.819 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:22.819 | 99.99th=[42206] 00:18:22.819 bw ( KiB/s): min= 96, max= 432, per=1.01%, avg=187.20, stdev=140.35, samples=5 00:18:22.819 iops : min= 24, max= 108, avg=46.80, stdev=35.09, samples=5 00:18:22.819 lat (usec) : 500=42.19%, 750=2.34% 00:18:22.819 lat (msec) : 50=54.69% 00:18:22.819 cpu : usr=0.07%, sys=0.07%, ctx=129, majf=0, minf=1 00:18:22.819 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:22.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.819 complete : 0=0.8%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.819 issued rwts: total=128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:22.819 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:22.819 00:18:22.819 Run status group 0 (all jobs): 00:18:22.819 READ: bw=18.1MiB/s (19.0MB/s), 176KiB/s-15.6MiB/s (180kB/s-16.3MB/s), io=65.9MiB (69.1MB), run=2894-3639msec 00:18:22.819 00:18:22.819 Disk stats (read/write): 00:18:22.819 nvme0n1: ios=13409/0, merge=0/0, ticks=3020/0, in_queue=3020, util=94.97% 00:18:22.819 nvme0n2: ios=813/0, merge=0/0, ticks=3568/0, in_queue=3568, util=96.28% 00:18:22.819 nvme0n3: ios=2470/0, merge=0/0, ticks=3159/0, in_queue=3159, util=99.94% 00:18:22.819 nvme0n4: ios=171/0, merge=0/0, ticks=3080/0, in_queue=3080, util=99.73% 00:18:23.077 01:46:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:23.077 01:46:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:18:23.335 01:46:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:18:23.335 01:46:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:18:23.593 01:46:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:23.593 01:46:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:18:23.850 01:46:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:23.850 01:46:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:18:24.108 01:46:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:18:24.108 01:46:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 4055055 00:18:24.108 01:46:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:18:24.108 01:46:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:24.108 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:24.108 01:46:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:24.108 01:46:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # local i=0 00:18:24.108 01:46:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:18:24.108 01:46:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:24.108 01:46:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:18:24.108 01:46:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:24.108 01:46:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1228 -- # return 0 00:18:24.108 01:46:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:18:24.108 01:46:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:18:24.108 nvmf hotplug test: fio failed as expected 00:18:24.108 01:46:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:24.366 01:46:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:18:24.366 01:46:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:18:24.366 01:46:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:18:24.366 01:46:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:18:24.366 01:46:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:18:24.366 01:46:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:24.366 01:46:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:18:24.366 01:46:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:24.366 01:46:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:18:24.366 01:46:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:24.366 01:46:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:24.366 rmmod nvme_tcp 00:18:24.366 rmmod nvme_fabrics 
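The hotplug pass above is the heart of this test: fio keeps reading from the exported namespaces while the harness deletes the backing bdevs over RPC, so the Remote I/O errors (err=121) and fio's non-zero exit status are the expected outcome rather than a failure. A minimal bash sketch of that check, assuming a running target and the in-tree rpc.py (the variable names here are illustrative, not the script's own):

  fio_pid=$!                                    # fio was started in the background earlier
  for bdev in Malloc0 Malloc1 Malloc2 Malloc3; do
    scripts/rpc.py bdev_malloc_delete "$bdev"   # pull the storage out from under fio
  done
  if wait "$fio_pid"; then                      # wait returns fio's exit status
    echo "ERROR: fio should have failed after hotplug" >&2
    exit 1
  fi
  echo 'nvmf hotplug test: fio failed as expected'
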
00:18:24.366 rmmod nvme_keyring 00:18:24.366 01:46:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:24.366 01:46:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:18:24.366 01:46:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:18:24.366 01:46:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 4053068 ']' 00:18:24.366 01:46:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 4053068 00:18:24.366 01:46:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@947 -- # '[' -z 4053068 ']' 00:18:24.366 01:46:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # kill -0 4053068 00:18:24.366 01:46:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # uname 00:18:24.366 01:46:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:24.366 01:46:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4053068 00:18:24.625 01:46:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:18:24.625 01:46:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:18:24.625 01:46:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4053068' 00:18:24.625 killing process with pid 4053068 00:18:24.625 01:46:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # kill 4053068 00:18:24.625 [2024-05-15 01:46:48.314344] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:24.625 01:46:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@971 -- # wait 4053068 00:18:24.625 01:46:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:24.625 01:46:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:24.625 01:46:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:24.625 01:46:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:24.625 01:46:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:24.625 01:46:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.625 01:46:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:24.625 01:46:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:27.157 01:46:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:27.157 00:18:27.157 real 0m23.982s 00:18:27.157 user 1m22.184s 00:18:27.157 sys 0m7.109s 00:18:27.157 01:46:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # xtrace_disable 00:18:27.157 01:46:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.157 ************************************ 00:18:27.157 END TEST nvmf_fio_target 00:18:27.157 ************************************ 00:18:27.157 01:46:50 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:27.157 01:46:50 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:18:27.158 01:46:50 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:18:27.158 01:46:50 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:18:27.158 ************************************ 00:18:27.158 START TEST nvmf_bdevio 00:18:27.158 ************************************ 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:27.158 * Looking for test storage... 00:18:27.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:18:27.158 01:46:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:29.753 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:29.753 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:18:29.753 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:29.753 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:29.753 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:29.753 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:29.753 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:29.753 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:18:29.753 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:29.753 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:18:29.753 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:18:29.753 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:18:29.753 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:18:29.753 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:18:29.753 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:18:29.753 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:29.753 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:29.753 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:29.753 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:29.753 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:29.753 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:29.753 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:29.753 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:29.753 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:29.753 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:29.753 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:29.753 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:29.753 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:29.753 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:29.753 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:29.753 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:29.753 01:46:53 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:29.753 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:29.753 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:29.753 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:29.753 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:29.753 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:29.753 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:29.754 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:29.754 Found net devices under 0000:09:00.0: cvl_0_0 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:29.754 
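The probe loop above resolves each supported PCI NIC to its kernel net device through sysfs, which is how cvl_0_0 and cvl_0_1 are found. A rough standalone equivalent of that lookup, assuming the PCI address is already known:

  pci=0000:09:00.1
  for pci_net_dev in "/sys/bus/pci/devices/$pci/net/"*; do
    echo "Found net devices under $pci: ${pci_net_dev##*/}"   # strip the sysfs path prefix
  done
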
Found net devices under 0000:09:00.1: cvl_0_1 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:29.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:29.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:18:29.754 00:18:29.754 --- 10.0.0.2 ping statistics --- 00:18:29.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:29.754 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:29.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:29.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:18:29.754 00:18:29.754 --- 10.0.0.1 ping statistics --- 00:18:29.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:29.754 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@721 -- # xtrace_disable 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=4058097 00:18:29.754 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:18:29.755 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 4058097 00:18:29.755 01:46:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@828 -- # '[' -z 4058097 ']' 00:18:29.755 01:46:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.755 01:46:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:29.755 01:46:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:29.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:29.755 01:46:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:29.755 01:46:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:29.755 [2024-05-15 01:46:53.340792] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:18:29.755 [2024-05-15 01:46:53.340864] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:29.755 EAL: No free 2048 kB hugepages reported on node 1 00:18:29.755 [2024-05-15 01:46:53.420995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:29.755 [2024-05-15 01:46:53.515607] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:29.755 [2024-05-15 01:46:53.515669] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:29.755 [2024-05-15 01:46:53.515696] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:29.755 [2024-05-15 01:46:53.515710] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:29.755 [2024-05-15 01:46:53.515722] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:29.755 [2024-05-15 01:46:53.515813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:29.755 [2024-05-15 01:46:53.515866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:29.755 [2024-05-15 01:46:53.515915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:29.755 [2024-05-15 01:46:53.515918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:29.755 01:46:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:29.755 01:46:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@861 -- # return 0 00:18:29.755 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:29.755 01:46:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@727 -- # xtrace_disable 00:18:29.755 01:46:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:29.755 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:29.755 01:46:53 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:29.755 01:46:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:29.755 01:46:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:29.755 [2024-05-15 01:46:53.653686] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:29.755 01:46:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:29.755 01:46:53 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:29.755 01:46:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:29.755 01:46:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:30.014 Malloc0 00:18:30.014 01:46:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:30.014 01:46:53 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:30.014 01:46:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:30.014 01:46:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:30.014 01:46:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:30.014 01:46:53 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:30.014 01:46:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:30.014 01:46:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:30.014 01:46:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:30.014 01:46:53 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:30.014 01:46:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:30.014 01:46:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
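Taken together, the rpc_cmd calls above stand up the whole target-side fixture: a TCP transport, a 64 MiB malloc bdev, a subsystem, a namespace, and a listener. Outside the harness the same sequence can be issued directly against a running target (a sketch, assuming the default RPC socket):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
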
00:18:30.014 [2024-05-15 01:46:53.704772] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:30.014 [2024-05-15 01:46:53.705072] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:30.014 01:46:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:30.014 01:46:53 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:18:30.014 01:46:53 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:30.014 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:18:30.014 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:18:30.014 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:30.014 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:30.014 { 00:18:30.014 "params": { 00:18:30.014 "name": "Nvme$subsystem", 00:18:30.014 "trtype": "$TEST_TRANSPORT", 00:18:30.014 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:30.014 "adrfam": "ipv4", 00:18:30.014 "trsvcid": "$NVMF_PORT", 00:18:30.014 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:30.014 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:30.014 "hdgst": ${hdgst:-false}, 00:18:30.014 "ddgst": ${ddgst:-false} 00:18:30.014 }, 00:18:30.014 "method": "bdev_nvme_attach_controller" 00:18:30.014 } 00:18:30.014 EOF 00:18:30.014 )") 00:18:30.014 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:18:30.014 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:18:30.014 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:18:30.014 01:46:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:30.014 "params": { 00:18:30.014 "name": "Nvme1", 00:18:30.014 "trtype": "tcp", 00:18:30.014 "traddr": "10.0.0.2", 00:18:30.014 "adrfam": "ipv4", 00:18:30.014 "trsvcid": "4420", 00:18:30.014 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.014 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:30.014 "hdgst": false, 00:18:30.014 "ddgst": false 00:18:30.014 }, 00:18:30.014 "method": "bdev_nvme_attach_controller" 00:18:30.014 }' 00:18:30.014 [2024-05-15 01:46:53.746122] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
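Note that bdevio receives its config as --json /dev/fd/62: gen_nvmf_target_json is expanded through bash process substitution, so no temporary file touches disk. A minimal reproduction of the pattern, with the attach params taken from the rendered output above and the standard SPDK subsystems wrapper assumed around them (gen_cfg is an illustrative stand-in, not the harness function):

  gen_cfg() {
    # the real generator renders these params from the NVMF_* environment
    printf '%s' '{
      "subsystems": [ {
        "subsystem": "bdev",
        "config": [ {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false, "ddgst": false
          }
        } ]
      } ]
    }'
  }
  test/bdev/bdevio/bdevio --json <(gen_cfg)   # the pipe appears as /dev/fd/NN in argv
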
00:18:30.014 [2024-05-15 01:46:53.746212] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4058125 ] 00:18:30.014 EAL: No free 2048 kB hugepages reported on node 1 00:18:30.014 [2024-05-15 01:46:53.816004] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:30.014 [2024-05-15 01:46:53.902526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:30.014 [2024-05-15 01:46:53.902574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:30.014 [2024-05-15 01:46:53.902577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.272 I/O targets: 00:18:30.272 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:30.272 00:18:30.272 00:18:30.272 CUnit - A unit testing framework for C - Version 2.1-3 00:18:30.272 http://cunit.sourceforge.net/ 00:18:30.272 00:18:30.272 00:18:30.272 Suite: bdevio tests on: Nvme1n1 00:18:30.272 Test: blockdev write read block ...passed 00:18:30.272 Test: blockdev write zeroes read block ...passed 00:18:30.272 Test: blockdev write zeroes read no split ...passed 00:18:30.529 Test: blockdev write zeroes read split ...passed 00:18:30.529 Test: blockdev write zeroes read split partial ...passed 00:18:30.529 Test: blockdev reset ...[2024-05-15 01:46:54.233130] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:30.529 [2024-05-15 01:46:54.233256] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x175e8c0 (9): Bad file descriptor 00:18:30.529 [2024-05-15 01:46:54.326948] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:30.529 passed 00:18:30.529 Test: blockdev write read 8 blocks ...passed 00:18:30.529 Test: blockdev write read size > 128k ...passed 00:18:30.529 Test: blockdev write read invalid size ...passed 00:18:30.529 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:30.529 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:30.529 Test: blockdev write read max offset ...passed 00:18:30.787 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:30.787 Test: blockdev writev readv 8 blocks ...passed 00:18:30.787 Test: blockdev writev readv 30 x 1block ...passed 00:18:30.787 Test: blockdev writev readv block ...passed 00:18:30.787 Test: blockdev writev readv size > 128k ...passed 00:18:30.787 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:30.787 Test: blockdev comparev and writev ...[2024-05-15 01:46:54.542768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:30.787 [2024-05-15 01:46:54.542806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:30.787 [2024-05-15 01:46:54.542831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:30.788 [2024-05-15 01:46:54.542848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:30.788 [2024-05-15 01:46:54.543183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:30.788 [2024-05-15 01:46:54.543209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:30.788 [2024-05-15 01:46:54.543241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:30.788 [2024-05-15 01:46:54.543259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:30.788 [2024-05-15 01:46:54.543595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:30.788 [2024-05-15 01:46:54.543620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:30.788 [2024-05-15 01:46:54.543641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:30.788 [2024-05-15 01:46:54.543658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:30.788 [2024-05-15 01:46:54.544000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:30.788 [2024-05-15 01:46:54.544024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:30.788 [2024-05-15 01:46:54.544046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:30.788 [2024-05-15 01:46:54.544063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:30.788 passed 00:18:30.788 Test: blockdev nvme passthru rw ...passed 00:18:30.788 Test: blockdev nvme passthru vendor specific ...[2024-05-15 01:46:54.627501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:30.788 [2024-05-15 01:46:54.627529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:30.788 [2024-05-15 01:46:54.627684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:30.788 [2024-05-15 01:46:54.627713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:30.788 [2024-05-15 01:46:54.627866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:30.788 [2024-05-15 01:46:54.627890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:30.788 [2024-05-15 01:46:54.628035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:30.788 [2024-05-15 01:46:54.628058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:30.788 passed 00:18:30.788 Test: blockdev nvme admin passthru ...passed 00:18:30.788 Test: blockdev copy ...passed 00:18:30.788 00:18:30.788 Run Summary: Type Total Ran Passed Failed Inactive 00:18:30.788 suites 1 1 n/a 0 0 00:18:30.788 tests 23 23 23 0 0 00:18:30.788 asserts 152 152 152 0 n/a 00:18:30.788 00:18:30.788 Elapsed time = 1.220 seconds 00:18:31.046 01:46:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:31.046 01:46:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:31.046 01:46:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:31.046 01:46:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:31.046 01:46:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:31.046 01:46:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:18:31.046 01:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:31.046 01:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:18:31.046 01:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:31.046 01:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:18:31.046 01:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:31.046 01:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:31.046 rmmod nvme_tcp 00:18:31.046 rmmod nvme_fabrics 00:18:31.046 rmmod nvme_keyring 00:18:31.046 01:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:31.046 01:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:18:31.046 01:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:18:31.046 01:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 4058097 ']' 00:18:31.046 01:46:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 4058097 00:18:31.046 01:46:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@947 -- # '[' -z 
4058097 ']' 00:18:31.046 01:46:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # kill -0 4058097 00:18:31.046 01:46:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # uname 00:18:31.046 01:46:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:31.046 01:46:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4058097 00:18:31.046 01:46:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # process_name=reactor_3 00:18:31.046 01:46:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' reactor_3 = sudo ']' 00:18:31.046 01:46:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4058097' 00:18:31.046 killing process with pid 4058097 00:18:31.046 01:46:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # kill 4058097 00:18:31.046 [2024-05-15 01:46:54.957001] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:31.046 01:46:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@971 -- # wait 4058097 00:18:31.305 01:46:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:31.305 01:46:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:31.305 01:46:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:31.305 01:46:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:31.305 01:46:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:31.305 01:46:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.305 01:46:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:31.305 01:46:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:33.840 01:46:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:33.840 00:18:33.840 real 0m6.611s 00:18:33.840 user 0m9.683s 00:18:33.840 sys 0m2.349s 00:18:33.840 01:46:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # xtrace_disable 00:18:33.840 01:46:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:33.840 ************************************ 00:18:33.840 END TEST nvmf_bdevio 00:18:33.840 ************************************ 00:18:33.840 01:46:57 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:33.840 01:46:57 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:18:33.840 01:46:57 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:18:33.840 01:46:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:33.840 ************************************ 00:18:33.840 START TEST nvmf_auth_target 00:18:33.840 ************************************ 00:18:33.840 01:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:33.840 * Looking for test storage... 
00:18:33.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:33.840 01:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:33.840 01:46:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:33.840 01:46:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:33.840 01:46:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:33.840 01:46:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:33.840 01:46:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:33.840 01:46:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:33.840 01:46:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:33.840 01:46:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:33.840 01:46:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:33.840 01:46:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:33.840 01:46:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:33.840 01:46:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:33.840 01:46:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:33.840 01:46:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:33.840 01:46:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:33.840 01:46:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:33.840 01:46:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:33.840 01:46:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:33.840 01:46:57 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:33.840 01:46:57 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:33.840 01:46:57 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:33.840 01:46:57 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.840 01:46:57 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.840 01:46:57 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.840 01:46:57 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:33.841 01:46:57 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.841 01:46:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:18:33.841 01:46:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:33.841 01:46:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:33.841 01:46:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:33.841 01:46:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:33.841 01:46:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:33.841 01:46:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:33.841 01:46:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:33.841 01:46:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:33.841 01:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:33.841 01:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:33.841 01:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:33.841 01:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:33.841 01:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:33.841 01:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:33.841 01:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@57 -- # nvmftestinit 00:18:33.841 01:46:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # 
'[' -z tcp ']' 00:18:33.841 01:46:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:33.841 01:46:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:33.841 01:46:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:33.841 01:46:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:33.841 01:46:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:33.841 01:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:33.841 01:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:33.841 01:46:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:33.841 01:46:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:33.841 01:46:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:33.841 01:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:36.373 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:36.373 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:36.373 Found net devices under 
0000:09:00.0: cvl_0_0 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:36.373 Found net devices under 0000:09:00.1: cvl_0_1 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:36.373 01:46:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:36.373 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:18:36.373 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:36.373 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:36.373 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:36.373 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:18:36.373 00:18:36.373 --- 10.0.0.2 ping statistics --- 00:18:36.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:36.373 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:18:36.373 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:36.373 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:36.373 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:18:36.373 00:18:36.373 --- 10.0.0.1 ping statistics --- 00:18:36.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:36.373 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:18:36.373 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:36.373 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:18:36.373 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:36.373 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:36.373 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:36.373 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:36.374 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:36.374 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:36.374 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:36.374 01:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@58 -- # nvmfappstart -L nvmf_auth 00:18:36.374 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:36.374 01:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@721 -- # xtrace_disable 00:18:36.374 01:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.374 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=4060607 00:18:36.374 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:36.374 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 4060607 00:18:36.374 01:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # '[' -z 4060607 ']' 00:18:36.374 01:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.374 01:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:36.374 01:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
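Before the target starts, nvmftestinit wires up a point-to-point test network: the first ice NIC (cvl_0_0) is moved into a private namespace to act as the target side, while the second (cvl_0_1) stays in the default namespace as the initiator. A minimal sketch of that plumbing, assembled from the commands in the trace above (interface names, addresses, and the nvmf_tgt invocation are taken verbatim from the log):

ip netns add cvl_0_0_ns_spdk                 # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move the target NIC into it
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator address (default namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                   # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator sanity check
# the nvmf target app then runs inside the namespace so it listens on 10.0.0.2:
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth

Both pings above come back in well under a millisecond, confirming the two namespaces can reach each other before DH-HMAC-CHAP authentication is exercised.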
00:18:36.374 01:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:36.374 01:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.632 01:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:36.632 01:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@861 -- # return 0 00:18:36.632 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:36.632 01:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@727 -- # xtrace_disable 00:18:36.632 01:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.632 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:36.632 01:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # hostpid=4060632 00:18:36.632 01:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:36.632 01:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:36.632 01:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # gen_dhchap_key null 48 00:18:36.632 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:36.632 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:36.632 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:36.632 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:18:36.632 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f00bb293b64c9575f4a973db20165b560f25f437dbcc3292 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.81K 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f00bb293b64c9575f4a973db20165b560f25f437dbcc3292 0 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f00bb293b64c9575f4a973db20165b560f25f437dbcc3292 0 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f00bb293b64c9575f4a973db20165b560f25f437dbcc3292 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.81K 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.81K 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # keys[0]=/tmp/spdk.key-null.81K 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # gen_dhchap_key sha256 32 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9078a25f38cc56f6e824f3dbe65b129e 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.UwB 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9078a25f38cc56f6e824f3dbe65b129e 1 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9078a25f38cc56f6e824f3dbe65b129e 1 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9078a25f38cc56f6e824f3dbe65b129e 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.UwB 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.UwB 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # keys[1]=/tmp/spdk.key-sha256.UwB 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # gen_dhchap_key sha384 48 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a5e626c9afcca90dc5b29ca1d61837aaa3a6b68557b55352 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.dND 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a5e626c9afcca90dc5b29ca1d61837aaa3a6b68557b55352 2 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a5e626c9afcca90dc5b29ca1d61837aaa3a6b68557b55352 2 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a5e626c9afcca90dc5b29ca1d61837aaa3a6b68557b55352 00:18:36.633 
01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:36.633 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:36.891 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.dND 00:18:36.891 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.dND 00:18:36.891 01:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # keys[2]=/tmp/spdk.key-sha384.dND 00:18:36.891 01:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:18:36.891 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:36.891 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:36.891 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:36.891 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:36.891 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:36.891 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:36.891 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d6a2e18cff08a2ca9b4f29b019d121e69533e7f89b6a8a277057d9b6c71e77e6 00:18:36.891 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:36.891 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.uXN 00:18:36.891 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d6a2e18cff08a2ca9b4f29b019d121e69533e7f89b6a8a277057d9b6c71e77e6 3 00:18:36.891 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d6a2e18cff08a2ca9b4f29b019d121e69533e7f89b6a8a277057d9b6c71e77e6 3 00:18:36.891 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:36.891 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:36.891 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d6a2e18cff08a2ca9b4f29b019d121e69533e7f89b6a8a277057d9b6c71e77e6 00:18:36.891 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:36.891 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:36.891 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.uXN 00:18:36.891 01:47:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.uXN 00:18:36.891 01:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[3]=/tmp/spdk.key-sha512.uXN 00:18:36.891 01:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # waitforlisten 4060607 00:18:36.891 01:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # '[' -z 4060607 ']' 00:18:36.891 01:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.891 01:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:36.891 01:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
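Each of the four secrets above comes out of gen_dhchap_key: xxd turns /dev/urandom into a hex string of the requested length, that ASCII hex string itself serves as the secret (decoding the base64 payload of the DHHC-1 strings used later gives back exactly these hex characters), and an unlogged python step wraps it into the DHHC-1 representation. A plausible reconstruction of that step, assuming the payload is base64(secret || CRC-32(secret)) with a little-endian CRC; the actual python body never appears in the trace, so the details below are an inference, not the script's verbatim code:

key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex characters, matching the null/48 case above
python3 - "$key" <<'PYEOF'
import base64, sys, zlib
secret = sys.argv[1].encode()                   # the ASCII hex string itself is the secret
crc = zlib.crc32(secret).to_bytes(4, "little")  # integrity tail; byte order is assumed
# the second DHHC-1 field names the transform: 00 = none, 01/02/03 = SHA-256/384/512
print("DHHC-1:00:" + base64.b64encode(secret + crc).decode() + ":")
PYEOF

Run with digests 0 through 3, this yields the DHHC-1:00: through DHHC-1:03: secrets seen in the nvme connect commands later in the trace. The test then registers each key file (chmod 0600, as above) with keyring_file_add_key on both the target RPC socket and /var/tmp/host.sock before looping over every digest and dhgroup combination.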
00:18:36.891 01:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:36.891 01:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.149 01:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:37.149 01:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@861 -- # return 0 00:18:37.149 01:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # waitforlisten 4060632 /var/tmp/host.sock 00:18:37.149 01:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # '[' -z 4060632 ']' 00:18:37.149 01:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/host.sock 00:18:37.149 01:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:37.149 01:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:37.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:37.149 01:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:37.149 01:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.407 01:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:37.407 01:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@861 -- # return 0 00:18:37.407 01:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@71 -- # rpc_cmd 00:18:37.408 01:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:37.408 01:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.408 01:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:37.408 01:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:18:37.408 01:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.81K 00:18:37.408 01:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:37.408 01:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.408 01:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:37.408 01:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.81K 00:18:37.408 01:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.81K 00:18:37.666 01:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:18:37.666 01:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.UwB 00:18:37.666 01:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:37.666 01:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.666 01:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:37.666 01:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.UwB 00:18:37.666 01:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
keyring_file_add_key key1 /tmp/spdk.key-sha256.UwB 00:18:37.924 01:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:18:37.924 01:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.dND 00:18:37.924 01:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:37.924 01:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.924 01:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:37.924 01:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.dND 00:18:37.924 01:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.dND 00:18:38.182 01:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:18:38.182 01:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.uXN 00:18:38.182 01:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:38.182 01:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.182 01:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:38.182 01:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.uXN 00:18:38.182 01:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.uXN 00:18:38.444 01:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:18:38.444 01:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:38.444 01:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:38.444 01:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:38.444 01:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:38.702 01:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 0 00:18:38.702 01:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:38.703 01:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:38.703 01:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:38.703 01:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:38.703 01:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:18:38.703 01:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:38.703 01:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.703 01:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:38.703 01:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:38.703 01:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:38.960 00:18:38.960 01:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:38.960 01:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:38.960 01:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.217 01:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.217 01:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.217 01:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:39.217 01:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.217 01:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:39.217 01:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:39.217 { 00:18:39.217 "cntlid": 1, 00:18:39.217 "qid": 0, 00:18:39.217 "state": "enabled", 00:18:39.217 "listen_address": { 00:18:39.217 "trtype": "TCP", 00:18:39.217 "adrfam": "IPv4", 00:18:39.217 "traddr": "10.0.0.2", 00:18:39.217 "trsvcid": "4420" 00:18:39.217 }, 00:18:39.217 "peer_address": { 00:18:39.217 "trtype": "TCP", 00:18:39.217 "adrfam": "IPv4", 00:18:39.217 "traddr": "10.0.0.1", 00:18:39.217 "trsvcid": "47890" 00:18:39.217 }, 00:18:39.217 "auth": { 00:18:39.217 "state": "completed", 00:18:39.217 "digest": "sha256", 00:18:39.217 "dhgroup": "null" 00:18:39.217 } 00:18:39.217 } 00:18:39.217 ]' 00:18:39.217 01:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:39.217 01:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:39.217 01:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:39.217 01:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:39.217 01:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:39.474 01:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.474 01:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.474 01:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.474 01:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZjAwYmIyOTNiNjRjOTU3NWY0YTk3M2RiMjAxNjViNTYwZjI1ZjQzN2RiY2MzMjkyW5t2hg==: 00:18:40.405 01:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:18:40.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.405 01:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:40.405 01:47:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:40.405 01:47:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.405 01:47:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:40.405 01:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:40.405 01:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:40.405 01:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:40.662 01:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 1 00:18:40.662 01:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:40.662 01:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:40.662 01:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:40.662 01:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:40.662 01:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:18:40.662 01:47:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:40.662 01:47:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.662 01:47:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:40.662 01:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:40.662 01:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:40.919 00:18:40.919 01:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:40.919 01:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.919 01:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:41.176 01:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.176 01:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.176 01:47:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:41.176 01:47:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.176 01:47:05 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:41.176 01:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:41.176 { 00:18:41.176 "cntlid": 3, 00:18:41.176 "qid": 0, 00:18:41.176 "state": "enabled", 00:18:41.176 "listen_address": { 00:18:41.176 "trtype": "TCP", 00:18:41.176 "adrfam": "IPv4", 00:18:41.176 "traddr": "10.0.0.2", 00:18:41.176 "trsvcid": "4420" 00:18:41.176 }, 00:18:41.176 "peer_address": { 00:18:41.176 "trtype": "TCP", 00:18:41.176 "adrfam": "IPv4", 00:18:41.176 "traddr": "10.0.0.1", 00:18:41.176 "trsvcid": "47914" 00:18:41.176 }, 00:18:41.176 "auth": { 00:18:41.176 "state": "completed", 00:18:41.176 "digest": "sha256", 00:18:41.176 "dhgroup": "null" 00:18:41.176 } 00:18:41.176 } 00:18:41.176 ]' 00:18:41.177 01:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:41.433 01:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:41.433 01:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:41.433 01:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:41.433 01:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:41.433 01:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.433 01:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.433 01:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.690 01:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OTA3OGEyNWYzOGNjNTZmNmU4MjRmM2RiZTY1YjEyOWVUtDjs: 00:18:42.621 01:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.621 01:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:42.621 01:47:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:42.621 01:47:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.622 01:47:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:42.622 01:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:42.622 01:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:42.622 01:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:42.879 01:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 2 00:18:42.879 01:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:42.879 01:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:42.879 01:47:06 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@36 -- # dhgroup=null 00:18:42.879 01:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:42.879 01:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:18:42.879 01:47:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:42.879 01:47:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.879 01:47:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:42.879 01:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:42.879 01:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:43.136 00:18:43.136 01:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:43.136 01:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:43.136 01:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.394 01:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.394 01:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.394 01:47:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:43.394 01:47:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.394 01:47:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:43.394 01:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:43.394 { 00:18:43.394 "cntlid": 5, 00:18:43.394 "qid": 0, 00:18:43.394 "state": "enabled", 00:18:43.394 "listen_address": { 00:18:43.394 "trtype": "TCP", 00:18:43.394 "adrfam": "IPv4", 00:18:43.394 "traddr": "10.0.0.2", 00:18:43.394 "trsvcid": "4420" 00:18:43.394 }, 00:18:43.394 "peer_address": { 00:18:43.394 "trtype": "TCP", 00:18:43.394 "adrfam": "IPv4", 00:18:43.394 "traddr": "10.0.0.1", 00:18:43.394 "trsvcid": "47936" 00:18:43.394 }, 00:18:43.394 "auth": { 00:18:43.394 "state": "completed", 00:18:43.394 "digest": "sha256", 00:18:43.394 "dhgroup": "null" 00:18:43.394 } 00:18:43.394 } 00:18:43.394 ]' 00:18:43.394 01:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:43.394 01:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:43.394 01:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:43.394 01:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:43.394 01:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:43.650 01:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.651 01:47:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.651 01:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.907 01:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:YTVlNjI2YzlhZmNjYTkwZGM1YjI5Y2ExZDYxODM3YWFhM2E2YjY4NTU3YjU1MzUyS2g3Pw==: 00:18:44.837 01:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.837 01:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:44.837 01:47:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:44.837 01:47:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.837 01:47:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:44.837 01:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:44.837 01:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:44.837 01:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:44.837 01:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 3 00:18:44.837 01:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:44.837 01:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:44.837 01:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:44.837 01:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:44.837 01:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:44.837 01:47:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:44.837 01:47:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.837 01:47:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:44.837 01:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:44.837 01:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:45.402 00:18:45.402 01:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:45.402 01:47:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:45.402 01:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.402 01:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.402 01:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.402 01:47:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:45.402 01:47:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.402 01:47:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:45.402 01:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:45.402 { 00:18:45.402 "cntlid": 7, 00:18:45.402 "qid": 0, 00:18:45.402 "state": "enabled", 00:18:45.402 "listen_address": { 00:18:45.402 "trtype": "TCP", 00:18:45.402 "adrfam": "IPv4", 00:18:45.402 "traddr": "10.0.0.2", 00:18:45.402 "trsvcid": "4420" 00:18:45.402 }, 00:18:45.402 "peer_address": { 00:18:45.402 "trtype": "TCP", 00:18:45.402 "adrfam": "IPv4", 00:18:45.402 "traddr": "10.0.0.1", 00:18:45.402 "trsvcid": "47962" 00:18:45.402 }, 00:18:45.402 "auth": { 00:18:45.402 "state": "completed", 00:18:45.402 "digest": "sha256", 00:18:45.402 "dhgroup": "null" 00:18:45.402 } 00:18:45.402 } 00:18:45.402 ]' 00:18:45.402 01:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:45.659 01:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:45.659 01:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:45.659 01:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:45.659 01:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:45.659 01:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.659 01:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.659 01:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.917 01:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDZhMmUxOGNmZjA4YTJjYTliNGYyOWIwMTlkMTIxZTY5NTMzZTdmODliNmE4YTI3NzA1N2Q5YjZjNzFlNzdlNnIXNiA=: 00:18:46.878 01:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.878 01:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:46.878 01:47:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:46.878 01:47:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.878 01:47:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:46.878 01:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for 
dhgroup in "${dhgroups[@]}" 00:18:46.878 01:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:46.878 01:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:46.878 01:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:47.136 01:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 0 00:18:47.136 01:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:47.136 01:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:47.136 01:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:47.136 01:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:47.136 01:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:18:47.136 01:47:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:47.136 01:47:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.136 01:47:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:47.136 01:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:47.136 01:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:47.394 00:18:47.394 01:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:47.394 01:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.394 01:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:47.651 01:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.651 01:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.651 01:47:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:47.651 01:47:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.651 01:47:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:47.651 01:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:47.651 { 00:18:47.651 "cntlid": 9, 00:18:47.651 "qid": 0, 00:18:47.651 "state": "enabled", 00:18:47.651 "listen_address": { 00:18:47.651 "trtype": "TCP", 00:18:47.651 "adrfam": "IPv4", 00:18:47.651 "traddr": "10.0.0.2", 00:18:47.651 "trsvcid": "4420" 00:18:47.651 }, 00:18:47.651 "peer_address": { 00:18:47.651 "trtype": "TCP", 00:18:47.651 "adrfam": "IPv4", 00:18:47.651 "traddr": "10.0.0.1", 
00:18:47.651 "trsvcid": "47994" 00:18:47.651 }, 00:18:47.651 "auth": { 00:18:47.651 "state": "completed", 00:18:47.651 "digest": "sha256", 00:18:47.651 "dhgroup": "ffdhe2048" 00:18:47.651 } 00:18:47.651 } 00:18:47.652 ]' 00:18:47.652 01:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:47.652 01:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:47.652 01:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:47.652 01:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:47.652 01:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:47.652 01:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.652 01:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.652 01:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.910 01:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZjAwYmIyOTNiNjRjOTU3NWY0YTk3M2RiMjAxNjViNTYwZjI1ZjQzN2RiY2MzMjkyW5t2hg==: 00:18:48.843 01:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.843 01:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:48.843 01:47:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:48.843 01:47:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.843 01:47:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:48.843 01:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:48.843 01:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:48.843 01:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:49.100 01:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 1 00:18:49.100 01:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:49.100 01:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:49.100 01:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:49.100 01:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:49.100 01:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:18:49.100 01:47:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:49.100 01:47:12 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:49.100 01:47:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:49.100 01:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:49.100 01:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:49.357 00:18:49.357 01:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:49.357 01:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:49.357 01:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.615 01:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.615 01:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.615 01:47:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:49.615 01:47:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.615 01:47:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:49.615 01:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:49.615 { 00:18:49.615 "cntlid": 11, 00:18:49.615 "qid": 0, 00:18:49.615 "state": "enabled", 00:18:49.615 "listen_address": { 00:18:49.615 "trtype": "TCP", 00:18:49.615 "adrfam": "IPv4", 00:18:49.615 "traddr": "10.0.0.2", 00:18:49.615 "trsvcid": "4420" 00:18:49.615 }, 00:18:49.615 "peer_address": { 00:18:49.615 "trtype": "TCP", 00:18:49.615 "adrfam": "IPv4", 00:18:49.615 "traddr": "10.0.0.1", 00:18:49.615 "trsvcid": "50124" 00:18:49.615 }, 00:18:49.615 "auth": { 00:18:49.615 "state": "completed", 00:18:49.615 "digest": "sha256", 00:18:49.615 "dhgroup": "ffdhe2048" 00:18:49.615 } 00:18:49.615 } 00:18:49.615 ]' 00:18:49.615 01:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:49.615 01:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:49.615 01:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:49.872 01:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:49.872 01:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:49.872 01:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.872 01:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.872 01:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.128 01:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OTA3OGEyNWYzOGNjNTZmNmU4MjRmM2RiZTY1YjEyOWVUtDjs: 00:18:51.060 01:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.060 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.060 01:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:51.060 01:47:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:51.060 01:47:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.060 01:47:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:51.060 01:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:51.060 01:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:51.060 01:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:51.318 01:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 2 00:18:51.318 01:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:51.318 01:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:51.318 01:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:51.318 01:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:51.318 01:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:18:51.318 01:47:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:51.318 01:47:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.318 01:47:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:51.318 01:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:51.318 01:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:51.575 00:18:51.575 01:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:51.575 01:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:51.575 01:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.832 01:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.832 01:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:18:51.832 01:47:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:51.832 01:47:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.832 01:47:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:51.832 01:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:51.832 { 00:18:51.832 "cntlid": 13, 00:18:51.832 "qid": 0, 00:18:51.832 "state": "enabled", 00:18:51.832 "listen_address": { 00:18:51.832 "trtype": "TCP", 00:18:51.832 "adrfam": "IPv4", 00:18:51.832 "traddr": "10.0.0.2", 00:18:51.832 "trsvcid": "4420" 00:18:51.832 }, 00:18:51.832 "peer_address": { 00:18:51.832 "trtype": "TCP", 00:18:51.832 "adrfam": "IPv4", 00:18:51.832 "traddr": "10.0.0.1", 00:18:51.832 "trsvcid": "50148" 00:18:51.832 }, 00:18:51.832 "auth": { 00:18:51.832 "state": "completed", 00:18:51.832 "digest": "sha256", 00:18:51.833 "dhgroup": "ffdhe2048" 00:18:51.833 } 00:18:51.833 } 00:18:51.833 ]' 00:18:51.833 01:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:51.833 01:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:51.833 01:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:52.090 01:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:52.090 01:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:52.090 01:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.090 01:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.090 01:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.348 01:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:YTVlNjI2YzlhZmNjYTkwZGM1YjI5Y2ExZDYxODM3YWFhM2E2YjY4NTU3YjU1MzUyS2g3Pw==: 00:18:53.281 01:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.281 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.281 01:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:53.281 01:47:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:53.281 01:47:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.281 01:47:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:53.281 01:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:53.281 01:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:53.281 01:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:53.538 01:47:17 nvmf_tcp.nvmf_auth_target -- 
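
The nvme connect steps exercise the same credentials from the kernel initiator, passing them in nvme-cli's DHHC-1 interchange format, DHHC-1:<t>:<base64 payload>:, where <t> names the hash used to transform the secret (00 for an untransformed key; the four test keys in this run happen to carry prefixes 00 through 03). That breakdown follows the NVMe-oF DH-HMAC-CHAP key format rather than anything this log states explicitly. A condensed round-trip, with the secret elided since the full values appear verbatim above:

  # Kernel initiator round-trip against the same subsystem (secret elided).
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid 29f67375-a902-e411-ace9-001e67bc3c9a \
      --dhchap-secret 'DHHC-1:02:...'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
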
target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 3 00:18:53.538 01:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:53.538 01:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:53.538 01:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:53.538 01:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:53.538 01:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:53.538 01:47:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:53.538 01:47:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.538 01:47:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:53.538 01:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:53.538 01:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:53.796 00:18:53.796 01:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:53.796 01:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:53.796 01:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.054 01:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.054 01:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.054 01:47:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:54.054 01:47:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.054 01:47:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:54.054 01:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:54.054 { 00:18:54.054 "cntlid": 15, 00:18:54.054 "qid": 0, 00:18:54.054 "state": "enabled", 00:18:54.054 "listen_address": { 00:18:54.054 "trtype": "TCP", 00:18:54.054 "adrfam": "IPv4", 00:18:54.054 "traddr": "10.0.0.2", 00:18:54.054 "trsvcid": "4420" 00:18:54.054 }, 00:18:54.054 "peer_address": { 00:18:54.054 "trtype": "TCP", 00:18:54.054 "adrfam": "IPv4", 00:18:54.054 "traddr": "10.0.0.1", 00:18:54.054 "trsvcid": "50176" 00:18:54.054 }, 00:18:54.054 "auth": { 00:18:54.054 "state": "completed", 00:18:54.054 "digest": "sha256", 00:18:54.054 "dhgroup": "ffdhe2048" 00:18:54.054 } 00:18:54.054 } 00:18:54.054 ]' 00:18:54.054 01:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:54.054 01:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:54.054 01:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:54.054 01:47:17 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:54.054 01:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:54.311 01:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.311 01:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.311 01:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.312 01:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDZhMmUxOGNmZjA4YTJjYTliNGYyOWIwMTlkMTIxZTY5NTMzZTdmODliNmE4YTI3NzA1N2Q5YjZjNzFlNzdlNnIXNiA=: 00:18:55.245 01:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.245 01:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:55.245 01:47:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:55.245 01:47:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.245 01:47:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:55.245 01:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:55.245 01:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:55.245 01:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:55.245 01:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:55.503 01:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 0 00:18:55.504 01:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:55.504 01:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:55.504 01:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:55.504 01:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:55.504 01:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:18:55.504 01:47:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:55.504 01:47:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.504 01:47:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:55.504 01:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:55.504 01:47:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:56.069 00:18:56.069 01:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:56.069 01:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:56.069 01:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.069 01:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.069 01:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.069 01:47:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:56.069 01:47:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.327 01:47:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:56.327 01:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:56.327 { 00:18:56.327 "cntlid": 17, 00:18:56.327 "qid": 0, 00:18:56.327 "state": "enabled", 00:18:56.327 "listen_address": { 00:18:56.327 "trtype": "TCP", 00:18:56.327 "adrfam": "IPv4", 00:18:56.327 "traddr": "10.0.0.2", 00:18:56.327 "trsvcid": "4420" 00:18:56.327 }, 00:18:56.327 "peer_address": { 00:18:56.327 "trtype": "TCP", 00:18:56.327 "adrfam": "IPv4", 00:18:56.327 "traddr": "10.0.0.1", 00:18:56.327 "trsvcid": "50198" 00:18:56.327 }, 00:18:56.327 "auth": { 00:18:56.327 "state": "completed", 00:18:56.327 "digest": "sha256", 00:18:56.327 "dhgroup": "ffdhe3072" 00:18:56.327 } 00:18:56.327 } 00:18:56.327 ]' 00:18:56.327 01:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:56.327 01:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:56.327 01:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:56.327 01:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:56.327 01:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:56.327 01:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.327 01:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.327 01:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.585 01:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZjAwYmIyOTNiNjRjOTU3NWY0YTk3M2RiMjAxNjViNTYwZjI1ZjQzN2RiY2MzMjkyW5t2hg==: 00:18:57.518 01:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.518 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.518 01:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:57.518 01:47:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:57.518 01:47:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.518 01:47:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:57.518 01:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:57.518 01:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:57.518 01:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:57.775 01:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 1 00:18:57.775 01:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:57.775 01:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:57.775 01:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:57.775 01:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:57.775 01:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:18:57.775 01:47:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:57.776 01:47:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.776 01:47:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:57.776 01:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:57.776 01:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:58.033 00:18:58.033 01:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:58.033 01:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:58.033 01:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.290 01:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.290 01:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.290 01:47:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:58.290 01:47:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.290 01:47:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:58.290 01:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:58.290 { 
00:18:58.290 "cntlid": 19, 00:18:58.290 "qid": 0, 00:18:58.290 "state": "enabled", 00:18:58.290 "listen_address": { 00:18:58.290 "trtype": "TCP", 00:18:58.290 "adrfam": "IPv4", 00:18:58.290 "traddr": "10.0.0.2", 00:18:58.290 "trsvcid": "4420" 00:18:58.290 }, 00:18:58.290 "peer_address": { 00:18:58.290 "trtype": "TCP", 00:18:58.290 "adrfam": "IPv4", 00:18:58.290 "traddr": "10.0.0.1", 00:18:58.290 "trsvcid": "39616" 00:18:58.290 }, 00:18:58.290 "auth": { 00:18:58.290 "state": "completed", 00:18:58.290 "digest": "sha256", 00:18:58.290 "dhgroup": "ffdhe3072" 00:18:58.290 } 00:18:58.290 } 00:18:58.290 ]' 00:18:58.290 01:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:58.547 01:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:58.547 01:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:58.547 01:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:58.547 01:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:58.547 01:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.547 01:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.547 01:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.804 01:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OTA3OGEyNWYzOGNjNTZmNmU4MjRmM2RiZTY1YjEyOWVUtDjs: 00:18:59.738 01:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.738 01:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:59.738 01:47:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:59.738 01:47:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.738 01:47:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:59.738 01:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:59.738 01:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:59.738 01:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:59.996 01:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 2 00:18:59.996 01:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:59.996 01:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:59.996 01:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:59.996 01:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:59.996 
01:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:18:59.996 01:47:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:59.996 01:47:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.996 01:47:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:59.996 01:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:59.996 01:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:00.560 00:19:00.560 01:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:00.560 01:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.560 01:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:00.560 01:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.560 01:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.560 01:47:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:00.560 01:47:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.560 01:47:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:00.560 01:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:00.560 { 00:19:00.560 "cntlid": 21, 00:19:00.560 "qid": 0, 00:19:00.560 "state": "enabled", 00:19:00.560 "listen_address": { 00:19:00.560 "trtype": "TCP", 00:19:00.560 "adrfam": "IPv4", 00:19:00.560 "traddr": "10.0.0.2", 00:19:00.560 "trsvcid": "4420" 00:19:00.560 }, 00:19:00.560 "peer_address": { 00:19:00.560 "trtype": "TCP", 00:19:00.560 "adrfam": "IPv4", 00:19:00.560 "traddr": "10.0.0.1", 00:19:00.560 "trsvcid": "39630" 00:19:00.560 }, 00:19:00.560 "auth": { 00:19:00.560 "state": "completed", 00:19:00.560 "digest": "sha256", 00:19:00.560 "dhgroup": "ffdhe3072" 00:19:00.560 } 00:19:00.560 } 00:19:00.560 ]' 00:19:00.560 01:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:00.817 01:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:00.817 01:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:00.817 01:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:00.817 01:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:00.818 01:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.818 01:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.818 01:47:24 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.075 01:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:YTVlNjI2YzlhZmNjYTkwZGM1YjI5Y2ExZDYxODM3YWFhM2E2YjY4NTU3YjU1MzUyS2g3Pw==: 00:19:02.018 01:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.018 01:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:02.018 01:47:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:02.018 01:47:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.018 01:47:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:02.018 01:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:02.018 01:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:02.018 01:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:02.330 01:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 3 00:19:02.330 01:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:02.330 01:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:02.330 01:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:02.330 01:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:02.330 01:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:02.330 01:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:02.330 01:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.330 01:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:02.330 01:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:02.330 01:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:02.588 00:19:02.588 01:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:02.588 01:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:02.588 01:47:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.847 01:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.847 01:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.847 01:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:02.847 01:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.847 01:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:02.847 01:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:02.847 { 00:19:02.847 "cntlid": 23, 00:19:02.847 "qid": 0, 00:19:02.847 "state": "enabled", 00:19:02.847 "listen_address": { 00:19:02.847 "trtype": "TCP", 00:19:02.847 "adrfam": "IPv4", 00:19:02.847 "traddr": "10.0.0.2", 00:19:02.847 "trsvcid": "4420" 00:19:02.847 }, 00:19:02.847 "peer_address": { 00:19:02.847 "trtype": "TCP", 00:19:02.847 "adrfam": "IPv4", 00:19:02.847 "traddr": "10.0.0.1", 00:19:02.847 "trsvcid": "39652" 00:19:02.847 }, 00:19:02.847 "auth": { 00:19:02.847 "state": "completed", 00:19:02.847 "digest": "sha256", 00:19:02.847 "dhgroup": "ffdhe3072" 00:19:02.847 } 00:19:02.847 } 00:19:02.847 ]' 00:19:02.847 01:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:02.847 01:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:02.847 01:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:03.104 01:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:03.104 01:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:03.104 01:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.104 01:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.104 01:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.362 01:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDZhMmUxOGNmZjA4YTJjYTliNGYyOWIwMTlkMTIxZTY5NTMzZTdmODliNmE4YTI3NzA1N2Q5YjZjNzFlNzdlNnIXNiA=: 00:19:04.296 01:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.296 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.296 01:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:04.296 01:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:04.296 01:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.296 01:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:04.296 01:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:04.296 01:47:28 
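
The for-loop markers at target/auth.sh@85 (just above) and @86 (resuming below) show that the sha256 pass is a nested sweep: every DH group under test is exercised against every key index. A reconstructed skeleton, with the array contents inferred from the groups and keys that actually appear in this trace rather than read from the script:

  # Loop structure implied by the target/auth.sh@85-@89 markers.
  for dhgroup in "${dhgroups[@]}"; do     # null ffdhe2048 ffdhe3072 ffdhe4096 ...
      for keyid in "${!keys[@]}"; do      # 0 1 2 3
          hostrpc bdev_nvme_set_options --dhchap-digests sha256 \
              --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha256 "$dhgroup" "$keyid"
      done
  done
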
nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:04.296 01:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:04.296 01:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:04.553 01:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 0 00:19:04.553 01:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:04.553 01:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:04.553 01:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:04.553 01:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:04.553 01:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:19:04.553 01:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:04.553 01:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.553 01:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:04.553 01:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:04.553 01:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:05.118 00:19:05.118 01:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:05.118 01:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.118 01:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:05.118 01:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.118 01:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.118 01:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:05.118 01:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.376 01:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:05.376 01:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:05.376 { 00:19:05.376 "cntlid": 25, 00:19:05.376 "qid": 0, 00:19:05.376 "state": "enabled", 00:19:05.376 "listen_address": { 00:19:05.376 "trtype": "TCP", 00:19:05.376 "adrfam": "IPv4", 00:19:05.376 "traddr": "10.0.0.2", 00:19:05.376 "trsvcid": "4420" 00:19:05.376 }, 00:19:05.376 "peer_address": { 00:19:05.376 "trtype": "TCP", 00:19:05.376 "adrfam": "IPv4", 00:19:05.376 "traddr": "10.0.0.1", 00:19:05.376 "trsvcid": "39686" 00:19:05.376 }, 
00:19:05.376 "auth": { 00:19:05.376 "state": "completed", 00:19:05.376 "digest": "sha256", 00:19:05.376 "dhgroup": "ffdhe4096" 00:19:05.376 } 00:19:05.376 } 00:19:05.376 ]' 00:19:05.376 01:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:05.376 01:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:05.376 01:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:05.376 01:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:05.376 01:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:05.376 01:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.376 01:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.376 01:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.635 01:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZjAwYmIyOTNiNjRjOTU3NWY0YTk3M2RiMjAxNjViNTYwZjI1ZjQzN2RiY2MzMjkyW5t2hg==: 00:19:06.569 01:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.569 01:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:06.569 01:47:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:06.569 01:47:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.569 01:47:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:06.569 01:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:06.569 01:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:06.569 01:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:06.827 01:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 1 00:19:06.827 01:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:06.827 01:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:06.827 01:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:06.827 01:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:06.827 01:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:19:06.827 01:47:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:06.827 01:47:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:06.827 01:47:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:06.827 01:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:06.827 01:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:07.393 00:19:07.393 01:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:07.393 01:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.393 01:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:07.651 01:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.651 01:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.651 01:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:07.651 01:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.651 01:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:07.651 01:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:07.651 { 00:19:07.651 "cntlid": 27, 00:19:07.651 "qid": 0, 00:19:07.651 "state": "enabled", 00:19:07.651 "listen_address": { 00:19:07.651 "trtype": "TCP", 00:19:07.651 "adrfam": "IPv4", 00:19:07.651 "traddr": "10.0.0.2", 00:19:07.651 "trsvcid": "4420" 00:19:07.651 }, 00:19:07.651 "peer_address": { 00:19:07.651 "trtype": "TCP", 00:19:07.651 "adrfam": "IPv4", 00:19:07.651 "traddr": "10.0.0.1", 00:19:07.651 "trsvcid": "39720" 00:19:07.651 }, 00:19:07.651 "auth": { 00:19:07.651 "state": "completed", 00:19:07.651 "digest": "sha256", 00:19:07.651 "dhgroup": "ffdhe4096" 00:19:07.651 } 00:19:07.651 } 00:19:07.651 ]' 00:19:07.651 01:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:07.651 01:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:07.651 01:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:07.651 01:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:07.651 01:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:07.651 01:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.651 01:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.651 01:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.908 01:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a 
--dhchap-secret DHHC-1:01:OTA3OGEyNWYzOGNjNTZmNmU4MjRmM2RiZTY1YjEyOWVUtDjs: 00:19:08.841 01:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.841 01:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:08.841 01:47:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:08.841 01:47:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.841 01:47:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:08.841 01:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:08.841 01:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:08.841 01:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:09.100 01:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 2 00:19:09.100 01:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:09.100 01:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:09.100 01:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:09.100 01:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:09.100 01:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:19:09.100 01:47:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:09.100 01:47:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.100 01:47:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:09.100 01:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:09.101 01:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:09.666 00:19:09.666 01:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:09.666 01:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:09.666 01:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.924 01:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.924 01:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.924 01:47:33 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:09.924 01:47:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.924 01:47:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:09.924 01:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:09.924 { 00:19:09.924 "cntlid": 29, 00:19:09.924 "qid": 0, 00:19:09.924 "state": "enabled", 00:19:09.924 "listen_address": { 00:19:09.924 "trtype": "TCP", 00:19:09.924 "adrfam": "IPv4", 00:19:09.924 "traddr": "10.0.0.2", 00:19:09.924 "trsvcid": "4420" 00:19:09.924 }, 00:19:09.924 "peer_address": { 00:19:09.924 "trtype": "TCP", 00:19:09.924 "adrfam": "IPv4", 00:19:09.924 "traddr": "10.0.0.1", 00:19:09.924 "trsvcid": "47522" 00:19:09.924 }, 00:19:09.924 "auth": { 00:19:09.924 "state": "completed", 00:19:09.924 "digest": "sha256", 00:19:09.924 "dhgroup": "ffdhe4096" 00:19:09.924 } 00:19:09.924 } 00:19:09.924 ]' 00:19:09.924 01:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:09.924 01:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:09.924 01:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:09.924 01:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:09.924 01:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:09.924 01:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.924 01:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.924 01:47:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.181 01:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:YTVlNjI2YzlhZmNjYTkwZGM1YjI5Y2ExZDYxODM3YWFhM2E2YjY4NTU3YjU1MzUyS2g3Pw==: 00:19:11.114 01:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.114 01:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:11.114 01:47:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:11.114 01:47:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.114 01:47:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:11.114 01:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:11.114 01:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:11.114 01:47:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:11.371 01:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 
ffdhe4096 3 00:19:11.371 01:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:11.371 01:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:11.371 01:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:11.371 01:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:11.371 01:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:11.371 01:47:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:11.371 01:47:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.371 01:47:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:11.371 01:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:11.371 01:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:11.936 00:19:11.936 01:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:11.936 01:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:11.937 01:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.194 01:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.194 01:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.194 01:47:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:12.194 01:47:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.194 01:47:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:12.194 01:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:12.194 { 00:19:12.194 "cntlid": 31, 00:19:12.194 "qid": 0, 00:19:12.194 "state": "enabled", 00:19:12.194 "listen_address": { 00:19:12.194 "trtype": "TCP", 00:19:12.194 "adrfam": "IPv4", 00:19:12.194 "traddr": "10.0.0.2", 00:19:12.194 "trsvcid": "4420" 00:19:12.194 }, 00:19:12.194 "peer_address": { 00:19:12.194 "trtype": "TCP", 00:19:12.194 "adrfam": "IPv4", 00:19:12.194 "traddr": "10.0.0.1", 00:19:12.194 "trsvcid": "47554" 00:19:12.194 }, 00:19:12.194 "auth": { 00:19:12.194 "state": "completed", 00:19:12.194 "digest": "sha256", 00:19:12.194 "dhgroup": "ffdhe4096" 00:19:12.194 } 00:19:12.194 } 00:19:12.194 ]' 00:19:12.194 01:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:12.194 01:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:12.194 01:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:12.194 01:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]]
00:19:12.194 01:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:19:12.194 01:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:12.194 01:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:12.194 01:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:12.452 01:47:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDZhMmUxOGNmZjA4YTJjYTliNGYyOWIwMTlkMTIxZTY5NTMzZTdmODliNmE4YTI3NzA1N2Q5YjZjNzFlNzdlNnIXNiA=:
00:19:13.385 01:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:13.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:13.385 01:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:19:13.385 01:47:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:13.385 01:47:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:13.385 01:47:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
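The ffdhe4096 pass is finished here and the trace moves on to ffdhe6144. The nvme connect/disconnect pair that closes every round above is the kernel-initiator leg of the test: the same subsystem is dialed with nvme-cli and the matching secret, which must authenticate end to end before the host is de-authorized again. A minimal sketch of that step (the DHHC-1 secret below is a placeholder; the real key material is printed inline in the trace):

# Kernel-initiator leg of a round (sketch; the secret value is a placeholder
# and must be replaced with a real DHHC-1:xx:...: string for this to succeed).
secret='DHHC-1:00:<base64-key-material>:'
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
    --hostid 29f67375-a902-e411-ace9-001e67bc3c9a \
    --dhchap-secret "$secret"
# On success the disconnect reports "disconnected 1 controller(s)", as seen above.
nvme disconnect -n nqn.2024-03.io.spdk:cnode0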
00:19:13.385 01:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}"
00:19:13.385 01:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}"
00:19:13.385 01:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:19:13.385 01:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:19:13.643 01:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 0
00:19:13.643 01:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs
00:19:13.643 01:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:19:13.643 01:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:19:13.643 01:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:19:13.643 01:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0
00:19:13.643 01:47:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:13.643 01:47:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:13.643 01:47:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:13.643 01:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0
00:19:13.643 01:47:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0
00:19:14.208
00:19:14.208 01:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers
00:19:14.208 01:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:14.208 01:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name'
00:19:14.466 01:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:14.466 01:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:14.466 01:47:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:14.466 01:47:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:14.466 01:47:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:14.466 01:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[
00:19:14.466 {
00:19:14.466 "cntlid": 33,
00:19:14.466 "qid": 0,
00:19:14.466 "state": "enabled",
00:19:14.466 "listen_address": {
00:19:14.466 "trtype": "TCP",
00:19:14.466 "adrfam": "IPv4",
00:19:14.466 "traddr": "10.0.0.2",
00:19:14.466 "trsvcid": "4420"
00:19:14.466 },
00:19:14.466 "peer_address": {
00:19:14.466 "trtype": "TCP",
00:19:14.466 "adrfam": "IPv4",
00:19:14.466 "traddr": "10.0.0.1",
00:19:14.466 "trsvcid": "47588"
00:19:14.466 },
00:19:14.466 "auth": {
00:19:14.466 "state": "completed",
00:19:14.466 "digest": "sha256",
00:19:14.466 "dhgroup": "ffdhe6144"
00:19:14.466 }
00:19:14.466 }
00:19:14.466 ]'
00:19:14.466 01:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest'
00:19:14.466 01:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:14.466 01:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup'
00:19:14.466 01:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:19:14.466 01:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:19:14.724 01:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:14.724 01:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:14.724 01:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:14.981 01:47:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZjAwYmIyOTNiNjRjOTU3NWY0YTk3M2RiMjAxNjViNTYwZjI1ZjQzN2RiY2MzMjkyW5t2hg==:
00:19:15.915 01:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:15.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:15.915 01:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:15.915 01:47:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:15.915 01:47:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.915 01:47:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:15.915 01:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:15.915 01:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:15.915 01:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:15.915 01:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 1 00:19:15.915 01:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:15.915 01:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:15.915 01:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:15.915 01:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:15.915 01:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:19:15.915 01:47:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:15.915 01:47:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.915 01:47:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:15.915 01:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:15.915 01:47:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:16.849 00:19:16.849 01:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:16.849 01:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:16.849 01:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.849 01:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.849 01:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.849 01:47:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:16.849 01:47:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.849 01:47:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:16.849 01:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:16.849 { 00:19:16.849 "cntlid": 35, 00:19:16.849 "qid": 0, 
00:19:16.849 "state": "enabled", 00:19:16.849 "listen_address": { 00:19:16.849 "trtype": "TCP", 00:19:16.849 "adrfam": "IPv4", 00:19:16.849 "traddr": "10.0.0.2", 00:19:16.849 "trsvcid": "4420" 00:19:16.849 }, 00:19:16.849 "peer_address": { 00:19:16.849 "trtype": "TCP", 00:19:16.849 "adrfam": "IPv4", 00:19:16.849 "traddr": "10.0.0.1", 00:19:16.849 "trsvcid": "47612" 00:19:16.849 }, 00:19:16.849 "auth": { 00:19:16.849 "state": "completed", 00:19:16.849 "digest": "sha256", 00:19:16.849 "dhgroup": "ffdhe6144" 00:19:16.849 } 00:19:16.849 } 00:19:16.849 ]' 00:19:16.849 01:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:16.849 01:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:16.849 01:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:16.849 01:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:16.849 01:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:17.107 01:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.107 01:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.107 01:47:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.365 01:47:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OTA3OGEyNWYzOGNjNTZmNmU4MjRmM2RiZTY1YjEyOWVUtDjs: 00:19:18.297 01:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.297 01:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:18.297 01:47:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:18.297 01:47:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.297 01:47:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:18.297 01:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:18.297 01:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:18.297 01:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:18.588 01:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 2 00:19:18.588 01:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:18.588 01:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:18.588 01:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:18.588 01:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:18.588 01:47:42 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:19:18.588 01:47:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:18.588 01:47:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.588 01:47:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:18.588 01:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:18.588 01:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:19.154 00:19:19.154 01:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:19.154 01:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.154 01:47:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:19.412 01:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.412 01:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.412 01:47:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:19.412 01:47:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.412 01:47:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:19.412 01:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:19.412 { 00:19:19.412 "cntlid": 37, 00:19:19.412 "qid": 0, 00:19:19.412 "state": "enabled", 00:19:19.412 "listen_address": { 00:19:19.412 "trtype": "TCP", 00:19:19.412 "adrfam": "IPv4", 00:19:19.412 "traddr": "10.0.0.2", 00:19:19.412 "trsvcid": "4420" 00:19:19.412 }, 00:19:19.412 "peer_address": { 00:19:19.412 "trtype": "TCP", 00:19:19.412 "adrfam": "IPv4", 00:19:19.412 "traddr": "10.0.0.1", 00:19:19.412 "trsvcid": "43718" 00:19:19.412 }, 00:19:19.412 "auth": { 00:19:19.412 "state": "completed", 00:19:19.412 "digest": "sha256", 00:19:19.412 "dhgroup": "ffdhe6144" 00:19:19.412 } 00:19:19.412 } 00:19:19.412 ]' 00:19:19.412 01:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:19.412 01:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:19.412 01:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:19.412 01:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:19.412 01:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:19.412 01:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.412 01:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.412 01:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.678 01:47:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:YTVlNjI2YzlhZmNjYTkwZGM1YjI5Y2ExZDYxODM3YWFhM2E2YjY4NTU3YjU1MzUyS2g3Pw==: 00:19:20.609 01:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.609 01:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:20.609 01:47:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:20.609 01:47:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.609 01:47:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:20.609 01:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:20.609 01:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:20.609 01:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:20.866 01:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 3 00:19:20.866 01:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:20.866 01:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:20.866 01:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:20.866 01:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:20.866 01:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:20.866 01:47:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:20.866 01:47:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.866 01:47:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:20.866 01:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:20.866 01:47:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:21.430 00:19:21.430 01:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:21.430 01:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:21.430 01:47:45 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.686 01:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.686 01:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.686 01:47:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:21.686 01:47:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.686 01:47:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:21.686 01:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:21.686 { 00:19:21.686 "cntlid": 39, 00:19:21.686 "qid": 0, 00:19:21.686 "state": "enabled", 00:19:21.686 "listen_address": { 00:19:21.686 "trtype": "TCP", 00:19:21.686 "adrfam": "IPv4", 00:19:21.687 "traddr": "10.0.0.2", 00:19:21.687 "trsvcid": "4420" 00:19:21.687 }, 00:19:21.687 "peer_address": { 00:19:21.687 "trtype": "TCP", 00:19:21.687 "adrfam": "IPv4", 00:19:21.687 "traddr": "10.0.0.1", 00:19:21.687 "trsvcid": "43764" 00:19:21.687 }, 00:19:21.687 "auth": { 00:19:21.687 "state": "completed", 00:19:21.687 "digest": "sha256", 00:19:21.687 "dhgroup": "ffdhe6144" 00:19:21.687 } 00:19:21.687 } 00:19:21.687 ]' 00:19:21.687 01:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:21.944 01:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:21.944 01:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:21.944 01:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:21.944 01:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:21.944 01:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.944 01:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.944 01:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.201 01:47:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDZhMmUxOGNmZjA4YTJjYTliNGYyOWIwMTlkMTIxZTY5NTMzZTdmODliNmE4YTI3NzA1N2Q5YjZjNzFlNzdlNnIXNiA=: 00:19:23.133 01:47:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.133 01:47:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:23.133 01:47:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:23.133 01:47:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.133 01:47:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:23.133 01:47:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:23.133 01:47:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # 
for keyid in "${!keys[@]}" 00:19:23.133 01:47:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:23.133 01:47:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:23.391 01:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 0 00:19:23.391 01:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:23.391 01:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:23.391 01:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:23.391 01:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:23.391 01:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:19:23.391 01:47:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:23.391 01:47:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.391 01:47:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:23.391 01:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:23.391 01:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:24.323 00:19:24.323 01:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:24.323 01:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:24.323 01:47:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.323 01:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.323 01:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.323 01:47:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:24.323 01:47:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.323 01:47:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:24.323 01:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:24.323 { 00:19:24.323 "cntlid": 41, 00:19:24.323 "qid": 0, 00:19:24.323 "state": "enabled", 00:19:24.323 "listen_address": { 00:19:24.323 "trtype": "TCP", 00:19:24.323 "adrfam": "IPv4", 00:19:24.323 "traddr": "10.0.0.2", 00:19:24.323 "trsvcid": "4420" 00:19:24.323 }, 00:19:24.323 "peer_address": { 00:19:24.323 "trtype": "TCP", 00:19:24.323 "adrfam": "IPv4", 00:19:24.323 "traddr": "10.0.0.1", 00:19:24.323 "trsvcid": "43798" 00:19:24.323 }, 00:19:24.323 "auth": { 00:19:24.323 "state": 
"completed", 00:19:24.323 "digest": "sha256", 00:19:24.323 "dhgroup": "ffdhe8192" 00:19:24.323 } 00:19:24.323 } 00:19:24.323 ]' 00:19:24.323 01:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:24.580 01:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:24.580 01:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:24.580 01:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:24.580 01:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:24.580 01:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.580 01:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.580 01:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.837 01:47:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZjAwYmIyOTNiNjRjOTU3NWY0YTk3M2RiMjAxNjViNTYwZjI1ZjQzN2RiY2MzMjkyW5t2hg==: 00:19:25.770 01:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.770 01:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:25.770 01:47:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:25.770 01:47:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.770 01:47:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:25.770 01:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:25.770 01:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:25.770 01:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:26.028 01:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 1 00:19:26.028 01:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:26.028 01:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:26.028 01:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:26.028 01:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:26.028 01:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:19:26.028 01:47:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:26.028 01:47:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.028 01:47:49 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:26.028 01:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:26.028 01:47:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:26.961 00:19:26.961 01:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:26.961 01:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:26.961 01:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.220 01:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.220 01:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.220 01:47:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:27.220 01:47:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.220 01:47:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:27.220 01:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:27.220 { 00:19:27.220 "cntlid": 43, 00:19:27.220 "qid": 0, 00:19:27.220 "state": "enabled", 00:19:27.220 "listen_address": { 00:19:27.220 "trtype": "TCP", 00:19:27.220 "adrfam": "IPv4", 00:19:27.220 "traddr": "10.0.0.2", 00:19:27.220 "trsvcid": "4420" 00:19:27.220 }, 00:19:27.220 "peer_address": { 00:19:27.220 "trtype": "TCP", 00:19:27.220 "adrfam": "IPv4", 00:19:27.220 "traddr": "10.0.0.1", 00:19:27.220 "trsvcid": "43834" 00:19:27.220 }, 00:19:27.220 "auth": { 00:19:27.220 "state": "completed", 00:19:27.220 "digest": "sha256", 00:19:27.220 "dhgroup": "ffdhe8192" 00:19:27.220 } 00:19:27.220 } 00:19:27.220 ]' 00:19:27.220 01:47:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:27.220 01:47:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:27.220 01:47:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:27.220 01:47:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:27.220 01:47:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:27.220 01:47:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.220 01:47:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.220 01:47:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.478 01:47:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret 
DHHC-1:01:OTA3OGEyNWYzOGNjNTZmNmU4MjRmM2RiZTY1YjEyOWVUtDjs: 00:19:28.414 01:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.414 01:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:28.414 01:47:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:28.414 01:47:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.414 01:47:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:28.414 01:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:28.414 01:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:28.414 01:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:28.672 01:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 2 00:19:28.672 01:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:28.672 01:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:28.672 01:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:28.672 01:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:28.672 01:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:19:28.672 01:47:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:28.672 01:47:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.672 01:47:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:28.672 01:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:28.672 01:47:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:29.605 00:19:29.605 01:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:29.605 01:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:29.605 01:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.863 01:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.863 01:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.863 01:47:53 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:19:29.863 01:47:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.863 01:47:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:29.863 01:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:29.863 { 00:19:29.863 "cntlid": 45, 00:19:29.863 "qid": 0, 00:19:29.863 "state": "enabled", 00:19:29.863 "listen_address": { 00:19:29.863 "trtype": "TCP", 00:19:29.863 "adrfam": "IPv4", 00:19:29.863 "traddr": "10.0.0.2", 00:19:29.863 "trsvcid": "4420" 00:19:29.863 }, 00:19:29.863 "peer_address": { 00:19:29.863 "trtype": "TCP", 00:19:29.863 "adrfam": "IPv4", 00:19:29.863 "traddr": "10.0.0.1", 00:19:29.863 "trsvcid": "58402" 00:19:29.863 }, 00:19:29.863 "auth": { 00:19:29.863 "state": "completed", 00:19:29.863 "digest": "sha256", 00:19:29.863 "dhgroup": "ffdhe8192" 00:19:29.863 } 00:19:29.863 } 00:19:29.863 ]' 00:19:29.863 01:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:29.863 01:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:29.863 01:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:30.121 01:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:30.121 01:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:30.121 01:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.121 01:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.121 01:47:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.378 01:47:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:YTVlNjI2YzlhZmNjYTkwZGM1YjI5Y2ExZDYxODM3YWFhM2E2YjY4NTU3YjU1MzUyS2g3Pw==: 00:19:31.309 01:47:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.310 01:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:31.310 01:47:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:31.310 01:47:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.310 01:47:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:31.310 01:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:31.310 01:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:31.310 01:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:31.567 01:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 3 00:19:31.567 
01:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:31.567 01:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:31.567 01:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:31.567 01:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:31.567 01:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:31.567 01:47:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:31.567 01:47:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.567 01:47:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:31.567 01:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:31.567 01:47:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:32.500 00:19:32.500 01:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:32.500 01:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:32.500 01:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.500 01:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.500 01:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.500 01:47:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:32.501 01:47:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.501 01:47:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:32.501 01:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:32.501 { 00:19:32.501 "cntlid": 47, 00:19:32.501 "qid": 0, 00:19:32.501 "state": "enabled", 00:19:32.501 "listen_address": { 00:19:32.501 "trtype": "TCP", 00:19:32.501 "adrfam": "IPv4", 00:19:32.501 "traddr": "10.0.0.2", 00:19:32.501 "trsvcid": "4420" 00:19:32.501 }, 00:19:32.501 "peer_address": { 00:19:32.501 "trtype": "TCP", 00:19:32.501 "adrfam": "IPv4", 00:19:32.501 "traddr": "10.0.0.1", 00:19:32.501 "trsvcid": "58424" 00:19:32.501 }, 00:19:32.501 "auth": { 00:19:32.501 "state": "completed", 00:19:32.501 "digest": "sha256", 00:19:32.501 "dhgroup": "ffdhe8192" 00:19:32.501 } 00:19:32.501 } 00:19:32.501 ]' 00:19:32.501 01:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:32.501 01:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:32.501 01:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:32.759 01:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:32.759 
01:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state'
00:19:32.759 01:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:32.759 01:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:32.759 01:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:33.016 01:47:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDZhMmUxOGNmZjA4YTJjYTliNGYyOWIwMTlkMTIxZTY5NTMzZTdmODliNmE4YTI3NzA1N2Q5YjZjNzFlNzdlNnIXNiA=:
00:19:33.949 01:47:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:33.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:33.949 01:47:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:19:33.949 01:47:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:33.949 01:47:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:33.949 01:47:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
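The sha256 sweep is complete at this point and the trace enters the second digest. The sha384 rounds start over from the "null" dhgroup, i.e. DH-HMAC-CHAP without an ephemeral Diffie-Hellman exchange, before walking the FFDHE groups again. In outline, the loops driving all of these blocks look like the sketch below, inferred from the target/auth.sh line numbers (84-89) shown in the trace; the exact contents of the digests and dhgroups arrays beyond the values visible here are assumptions, and hostrpc/connect_authenticate are helpers defined by auth.sh itself:

# Assumed outline of the sweep in target/auth.sh (not the verbatim script).
for digest in "${digests[@]}"; do          # sha256, sha384, ... (assumed list)
    for dhgroup in "${dhgroups[@]}"; do    # null, ..., ffdhe4096, ffdhe6144, ffdhe8192 (assumed list)
        for keyid in "${!keys[@]}"; do     # key0..key3, as seen in the trace
            # Pin the host to exactly one digest/dhgroup, then run one round.
            hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done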
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:34.465 00:19:34.465 01:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:34.465 01:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:34.465 01:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.751 01:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.751 01:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.751 01:47:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:34.751 01:47:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.751 01:47:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:34.751 01:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:34.751 { 00:19:34.751 "cntlid": 49, 00:19:34.751 "qid": 0, 00:19:34.751 "state": "enabled", 00:19:34.751 "listen_address": { 00:19:34.751 "trtype": "TCP", 00:19:34.751 "adrfam": "IPv4", 00:19:34.751 "traddr": "10.0.0.2", 00:19:34.751 "trsvcid": "4420" 00:19:34.751 }, 00:19:34.751 "peer_address": { 00:19:34.751 "trtype": "TCP", 00:19:34.751 "adrfam": "IPv4", 00:19:34.751 "traddr": "10.0.0.1", 00:19:34.751 "trsvcid": "58472" 00:19:34.751 }, 00:19:34.751 "auth": { 00:19:34.751 "state": "completed", 00:19:34.751 "digest": "sha384", 00:19:34.751 "dhgroup": "null" 00:19:34.751 } 00:19:34.751 } 00:19:34.751 ]' 00:19:34.751 01:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:34.751 01:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:34.751 01:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:34.751 01:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:19:34.751 01:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:34.751 01:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.751 01:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.751 01:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.017 01:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZjAwYmIyOTNiNjRjOTU3NWY0YTk3M2RiMjAxNjViNTYwZjI1ZjQzN2RiY2MzMjkyW5t2hg==: 00:19:35.949 01:47:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.949 01:47:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:35.949 01:47:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:35.949 01:47:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.949 01:47:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:35.949 01:47:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:35.949 01:47:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:35.949 01:47:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:36.207 01:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 1 00:19:36.207 01:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:36.207 01:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:36.207 01:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:36.207 01:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:36.207 01:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:19:36.207 01:48:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:36.207 01:48:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.207 01:48:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:36.207 01:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:36.207 01:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:36.465 00:19:36.465 01:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:36.465 01:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.465 01:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:36.723 01:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.723 01:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.723 01:48:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:36.723 01:48:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.723 01:48:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:36.723 01:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:36.723 { 00:19:36.723 "cntlid": 51, 00:19:36.723 "qid": 
0, 00:19:36.723 "state": "enabled", 00:19:36.723 "listen_address": { 00:19:36.723 "trtype": "TCP", 00:19:36.723 "adrfam": "IPv4", 00:19:36.723 "traddr": "10.0.0.2", 00:19:36.723 "trsvcid": "4420" 00:19:36.723 }, 00:19:36.723 "peer_address": { 00:19:36.723 "trtype": "TCP", 00:19:36.723 "adrfam": "IPv4", 00:19:36.723 "traddr": "10.0.0.1", 00:19:36.723 "trsvcid": "58502" 00:19:36.723 }, 00:19:36.723 "auth": { 00:19:36.723 "state": "completed", 00:19:36.723 "digest": "sha384", 00:19:36.723 "dhgroup": "null" 00:19:36.723 } 00:19:36.723 } 00:19:36.723 ]' 00:19:36.723 01:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:36.981 01:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:36.981 01:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:36.981 01:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:19:36.981 01:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:36.981 01:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.981 01:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.981 01:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.239 01:48:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OTA3OGEyNWYzOGNjNTZmNmU4MjRmM2RiZTY1YjEyOWVUtDjs: 00:19:38.173 01:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.173 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.173 01:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:38.173 01:48:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:38.173 01:48:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.173 01:48:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:38.173 01:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:38.173 01:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:38.173 01:48:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:38.431 01:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 2 00:19:38.431 01:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:38.431 01:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:38.431 01:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:38.431 01:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:38.431 01:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:19:38.431 01:48:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:38.431 01:48:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.431 01:48:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:38.431 01:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:38.431 01:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:38.689 00:19:38.689 01:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:38.689 01:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.689 01:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:38.947 01:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.947 01:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.947 01:48:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:38.947 01:48:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.947 01:48:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:38.947 01:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:38.947 { 00:19:38.947 "cntlid": 53, 00:19:38.947 "qid": 0, 00:19:38.947 "state": "enabled", 00:19:38.947 "listen_address": { 00:19:38.947 "trtype": "TCP", 00:19:38.947 "adrfam": "IPv4", 00:19:38.947 "traddr": "10.0.0.2", 00:19:38.947 "trsvcid": "4420" 00:19:38.947 }, 00:19:38.947 "peer_address": { 00:19:38.947 "trtype": "TCP", 00:19:38.947 "adrfam": "IPv4", 00:19:38.947 "traddr": "10.0.0.1", 00:19:38.947 "trsvcid": "48652" 00:19:38.947 }, 00:19:38.947 "auth": { 00:19:38.947 "state": "completed", 00:19:38.947 "digest": "sha384", 00:19:38.947 "dhgroup": "null" 00:19:38.947 } 00:19:38.947 } 00:19:38.947 ]' 00:19:38.947 01:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:38.947 01:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:38.947 01:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:38.947 01:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:19:38.947 01:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:39.205 01:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.205 01:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.205 01:48:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.205 01:48:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:YTVlNjI2YzlhZmNjYTkwZGM1YjI5Y2ExZDYxODM3YWFhM2E2YjY4NTU3YjU1MzUyS2g3Pw==: 00:19:40.136 01:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.136 01:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:40.136 01:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:40.136 01:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.136 01:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:40.136 01:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:40.136 01:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:40.136 01:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:40.394 01:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 3 00:19:40.394 01:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:40.394 01:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:40.394 01:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:40.394 01:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:40.394 01:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:40.394 01:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:40.394 01:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.394 01:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:40.394 01:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:40.394 01:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:40.960 00:19:40.960 01:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:40.960 01:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:40.960 01:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.960 01:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.960 01:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.960 01:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:40.960 01:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.218 01:48:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:41.218 01:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:41.218 { 00:19:41.218 "cntlid": 55, 00:19:41.218 "qid": 0, 00:19:41.218 "state": "enabled", 00:19:41.218 "listen_address": { 00:19:41.218 "trtype": "TCP", 00:19:41.218 "adrfam": "IPv4", 00:19:41.218 "traddr": "10.0.0.2", 00:19:41.218 "trsvcid": "4420" 00:19:41.218 }, 00:19:41.218 "peer_address": { 00:19:41.218 "trtype": "TCP", 00:19:41.218 "adrfam": "IPv4", 00:19:41.218 "traddr": "10.0.0.1", 00:19:41.218 "trsvcid": "48688" 00:19:41.218 }, 00:19:41.218 "auth": { 00:19:41.218 "state": "completed", 00:19:41.218 "digest": "sha384", 00:19:41.218 "dhgroup": "null" 00:19:41.218 } 00:19:41.218 } 00:19:41.218 ]' 00:19:41.218 01:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:41.218 01:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:41.218 01:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:41.218 01:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:19:41.218 01:48:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:41.218 01:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.218 01:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.218 01:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.476 01:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDZhMmUxOGNmZjA4YTJjYTliNGYyOWIwMTlkMTIxZTY5NTMzZTdmODliNmE4YTI3NzA1N2Q5YjZjNzFlNzdlNnIXNiA=: 00:19:42.408 01:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.408 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.408 01:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:42.408 01:48:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:42.408 01:48:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.408 01:48:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:42.408 01:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:42.408 01:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:42.408 01:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:42.408 01:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:42.666 01:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 0 00:19:42.666 01:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:42.666 01:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:42.666 01:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:42.666 01:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:42.666 01:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:19:42.666 01:48:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:42.666 01:48:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.666 01:48:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:42.666 01:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:42.666 01:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:42.924 00:19:42.924 01:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:42.924 01:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:42.924 01:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.182 01:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.182 01:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.182 01:48:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:43.182 01:48:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.182 01:48:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:43.182 01:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:43.182 { 00:19:43.182 "cntlid": 57, 00:19:43.182 "qid": 0, 00:19:43.182 "state": "enabled", 00:19:43.182 "listen_address": { 00:19:43.182 "trtype": "TCP", 00:19:43.182 "adrfam": "IPv4", 00:19:43.182 "traddr": "10.0.0.2", 00:19:43.182 "trsvcid": "4420" 00:19:43.182 }, 00:19:43.182 "peer_address": { 00:19:43.182 "trtype": "TCP", 00:19:43.182 "adrfam": "IPv4", 00:19:43.182 "traddr": "10.0.0.1", 00:19:43.182 "trsvcid": "48720" 00:19:43.182 }, 00:19:43.182 "auth": { 00:19:43.182 "state": "completed", 00:19:43.182 "digest": "sha384", 00:19:43.182 "dhgroup": "ffdhe2048" 00:19:43.182 } 00:19:43.182 } 
00:19:43.182 ]' 00:19:43.182 01:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:43.182 01:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:43.182 01:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:43.182 01:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:43.182 01:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:43.441 01:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.441 01:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.441 01:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.699 01:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZjAwYmIyOTNiNjRjOTU3NWY0YTk3M2RiMjAxNjViNTYwZjI1ZjQzN2RiY2MzMjkyW5t2hg==: 00:19:44.632 01:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.632 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.632 01:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:44.632 01:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:44.632 01:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.632 01:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:44.632 01:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:44.632 01:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:44.632 01:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:44.890 01:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 1 00:19:44.890 01:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:44.890 01:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:44.890 01:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:44.890 01:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:44.890 01:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:19:44.890 01:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:44.890 01:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.890 01:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:44.890 01:48:08 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:44.890 01:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:45.147 00:19:45.147 01:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:45.147 01:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:45.147 01:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.404 01:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.404 01:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.404 01:48:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:45.404 01:48:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.404 01:48:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:45.404 01:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:45.404 { 00:19:45.404 "cntlid": 59, 00:19:45.404 "qid": 0, 00:19:45.404 "state": "enabled", 00:19:45.404 "listen_address": { 00:19:45.404 "trtype": "TCP", 00:19:45.404 "adrfam": "IPv4", 00:19:45.404 "traddr": "10.0.0.2", 00:19:45.404 "trsvcid": "4420" 00:19:45.404 }, 00:19:45.404 "peer_address": { 00:19:45.404 "trtype": "TCP", 00:19:45.404 "adrfam": "IPv4", 00:19:45.404 "traddr": "10.0.0.1", 00:19:45.404 "trsvcid": "48748" 00:19:45.404 }, 00:19:45.404 "auth": { 00:19:45.404 "state": "completed", 00:19:45.404 "digest": "sha384", 00:19:45.404 "dhgroup": "ffdhe2048" 00:19:45.404 } 00:19:45.404 } 00:19:45.404 ]' 00:19:45.404 01:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:45.404 01:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:45.404 01:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:45.404 01:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:45.404 01:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:45.404 01:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.404 01:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.404 01:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.661 01:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OTA3OGEyNWYzOGNjNTZmNmU4MjRmM2RiZTY1YjEyOWVUtDjs: 00:19:46.590 01:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # 
nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.590 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.590 01:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:46.590 01:48:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:46.590 01:48:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.590 01:48:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:46.590 01:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:46.590 01:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:46.590 01:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:46.847 01:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 2 00:19:46.847 01:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:46.847 01:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:46.847 01:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:46.847 01:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:46.847 01:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:19:46.847 01:48:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:46.847 01:48:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.847 01:48:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:46.847 01:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:46.847 01:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:47.105 00:19:47.105 01:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:47.105 01:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:47.105 01:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.363 01:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.363 01:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.363 01:48:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:47.363 01:48:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:47.363 01:48:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:47.363 01:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:47.363 { 00:19:47.363 "cntlid": 61, 00:19:47.363 "qid": 0, 00:19:47.363 "state": "enabled", 00:19:47.363 "listen_address": { 00:19:47.363 "trtype": "TCP", 00:19:47.363 "adrfam": "IPv4", 00:19:47.363 "traddr": "10.0.0.2", 00:19:47.363 "trsvcid": "4420" 00:19:47.363 }, 00:19:47.363 "peer_address": { 00:19:47.363 "trtype": "TCP", 00:19:47.363 "adrfam": "IPv4", 00:19:47.363 "traddr": "10.0.0.1", 00:19:47.363 "trsvcid": "48772" 00:19:47.363 }, 00:19:47.363 "auth": { 00:19:47.363 "state": "completed", 00:19:47.363 "digest": "sha384", 00:19:47.363 "dhgroup": "ffdhe2048" 00:19:47.363 } 00:19:47.363 } 00:19:47.363 ]' 00:19:47.363 01:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:47.621 01:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:47.621 01:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:47.621 01:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:47.621 01:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:47.621 01:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.621 01:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.621 01:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.879 01:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:YTVlNjI2YzlhZmNjYTkwZGM1YjI5Y2ExZDYxODM3YWFhM2E2YjY4NTU3YjU1MzUyS2g3Pw==: 00:19:48.812 01:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.812 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.812 01:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:48.812 01:48:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:48.812 01:48:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.812 01:48:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:48.812 01:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:48.812 01:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:48.812 01:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:49.071 01:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 3 00:19:49.071 01:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:49.071 01:48:12 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:19:49.071 01:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:49.071 01:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:49.071 01:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:49.071 01:48:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:49.071 01:48:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.071 01:48:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:49.071 01:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:49.071 01:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:49.330 00:19:49.330 01:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:49.330 01:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:49.330 01:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.589 01:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.589 01:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.589 01:48:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:49.589 01:48:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.589 01:48:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:49.589 01:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:49.589 { 00:19:49.589 "cntlid": 63, 00:19:49.589 "qid": 0, 00:19:49.589 "state": "enabled", 00:19:49.589 "listen_address": { 00:19:49.589 "trtype": "TCP", 00:19:49.589 "adrfam": "IPv4", 00:19:49.589 "traddr": "10.0.0.2", 00:19:49.589 "trsvcid": "4420" 00:19:49.589 }, 00:19:49.589 "peer_address": { 00:19:49.589 "trtype": "TCP", 00:19:49.589 "adrfam": "IPv4", 00:19:49.589 "traddr": "10.0.0.1", 00:19:49.589 "trsvcid": "56918" 00:19:49.589 }, 00:19:49.589 "auth": { 00:19:49.589 "state": "completed", 00:19:49.589 "digest": "sha384", 00:19:49.589 "dhgroup": "ffdhe2048" 00:19:49.589 } 00:19:49.589 } 00:19:49.589 ]' 00:19:49.589 01:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:49.589 01:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:49.589 01:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:49.589 01:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:49.589 01:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:49.589 01:48:13 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.589 01:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.589 01:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.847 01:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDZhMmUxOGNmZjA4YTJjYTliNGYyOWIwMTlkMTIxZTY5NTMzZTdmODliNmE4YTI3NzA1N2Q5YjZjNzFlNzdlNnIXNiA=: 00:19:50.787 01:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.787 01:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:50.787 01:48:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:50.787 01:48:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.787 01:48:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:50.787 01:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:50.787 01:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:50.787 01:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:50.787 01:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:51.081 01:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 0 00:19:51.081 01:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:51.081 01:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:51.081 01:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:51.081 01:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:51.081 01:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:19:51.081 01:48:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:51.081 01:48:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.081 01:48:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:51.082 01:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:51.082 01:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:51.339 00:19:51.597 01:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:51.597 01:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:51.597 01:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.598 01:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.598 01:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.598 01:48:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:51.598 01:48:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.598 01:48:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:51.598 01:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:51.598 { 00:19:51.598 "cntlid": 65, 00:19:51.598 "qid": 0, 00:19:51.598 "state": "enabled", 00:19:51.598 "listen_address": { 00:19:51.598 "trtype": "TCP", 00:19:51.598 "adrfam": "IPv4", 00:19:51.598 "traddr": "10.0.0.2", 00:19:51.598 "trsvcid": "4420" 00:19:51.598 }, 00:19:51.598 "peer_address": { 00:19:51.598 "trtype": "TCP", 00:19:51.598 "adrfam": "IPv4", 00:19:51.598 "traddr": "10.0.0.1", 00:19:51.598 "trsvcid": "56952" 00:19:51.598 }, 00:19:51.598 "auth": { 00:19:51.598 "state": "completed", 00:19:51.598 "digest": "sha384", 00:19:51.598 "dhgroup": "ffdhe3072" 00:19:51.598 } 00:19:51.598 } 00:19:51.598 ]' 00:19:51.598 01:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:51.856 01:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:51.856 01:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:51.856 01:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:51.856 01:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:51.856 01:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.856 01:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.856 01:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.113 01:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZjAwYmIyOTNiNjRjOTU3NWY0YTk3M2RiMjAxNjViNTYwZjI1ZjQzN2RiY2MzMjkyW5t2hg==: 00:19:53.052 01:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.052 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.052 01:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:53.052 01:48:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:53.052 
01:48:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.052 01:48:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:53.052 01:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:53.052 01:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:53.052 01:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:53.309 01:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 1 00:19:53.309 01:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:53.309 01:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:53.309 01:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:53.309 01:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:53.309 01:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:19:53.309 01:48:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:53.310 01:48:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.310 01:48:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:53.310 01:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:53.310 01:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:53.566 00:19:53.566 01:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:53.566 01:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.566 01:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:53.823 01:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.823 01:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.823 01:48:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:53.823 01:48:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.823 01:48:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:53.823 01:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:53.823 { 00:19:53.823 "cntlid": 67, 00:19:53.823 "qid": 0, 00:19:53.823 "state": "enabled", 00:19:53.823 "listen_address": { 00:19:53.823 "trtype": "TCP", 00:19:53.823 "adrfam": "IPv4", 00:19:53.823 "traddr": "10.0.0.2", 00:19:53.823 "trsvcid": 
"4420" 00:19:53.823 }, 00:19:53.823 "peer_address": { 00:19:53.823 "trtype": "TCP", 00:19:53.823 "adrfam": "IPv4", 00:19:53.823 "traddr": "10.0.0.1", 00:19:53.823 "trsvcid": "56970" 00:19:53.823 }, 00:19:53.823 "auth": { 00:19:53.823 "state": "completed", 00:19:53.823 "digest": "sha384", 00:19:53.823 "dhgroup": "ffdhe3072" 00:19:53.823 } 00:19:53.823 } 00:19:53.823 ]' 00:19:53.823 01:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:53.823 01:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:53.823 01:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:53.823 01:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:53.823 01:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:54.081 01:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.081 01:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.081 01:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.338 01:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OTA3OGEyNWYzOGNjNTZmNmU4MjRmM2RiZTY1YjEyOWVUtDjs: 00:19:55.270 01:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.270 01:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:55.270 01:48:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:55.270 01:48:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.270 01:48:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:55.270 01:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:55.270 01:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:55.270 01:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:55.270 01:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 2 00:19:55.270 01:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:55.270 01:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:55.270 01:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:55.270 01:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:55.270 01:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:19:55.270 01:48:19 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:55.270 01:48:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.270 01:48:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:55.270 01:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:55.270 01:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:55.834 00:19:55.834 01:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:55.834 01:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:55.834 01:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.092 01:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.092 01:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.092 01:48:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:56.092 01:48:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.092 01:48:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:56.092 01:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:56.092 { 00:19:56.092 "cntlid": 69, 00:19:56.092 "qid": 0, 00:19:56.092 "state": "enabled", 00:19:56.092 "listen_address": { 00:19:56.092 "trtype": "TCP", 00:19:56.092 "adrfam": "IPv4", 00:19:56.092 "traddr": "10.0.0.2", 00:19:56.092 "trsvcid": "4420" 00:19:56.092 }, 00:19:56.092 "peer_address": { 00:19:56.092 "trtype": "TCP", 00:19:56.092 "adrfam": "IPv4", 00:19:56.092 "traddr": "10.0.0.1", 00:19:56.092 "trsvcid": "57006" 00:19:56.092 }, 00:19:56.092 "auth": { 00:19:56.092 "state": "completed", 00:19:56.092 "digest": "sha384", 00:19:56.092 "dhgroup": "ffdhe3072" 00:19:56.092 } 00:19:56.092 } 00:19:56.092 ]' 00:19:56.092 01:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:56.092 01:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:56.092 01:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:56.092 01:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:56.092 01:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:56.092 01:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.092 01:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.092 01:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.349 01:48:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:YTVlNjI2YzlhZmNjYTkwZGM1YjI5Y2ExZDYxODM3YWFhM2E2YjY4NTU3YjU1MzUyS2g3Pw==: 00:19:57.280 01:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.280 01:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:57.280 01:48:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:57.280 01:48:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.280 01:48:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:57.280 01:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:57.280 01:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:57.280 01:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:57.537 01:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 3 00:19:57.537 01:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:57.537 01:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:57.537 01:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:57.537 01:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:57.537 01:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:57.537 01:48:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:57.537 01:48:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.537 01:48:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:57.537 01:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:57.538 01:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:57.795 00:19:57.795 01:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:57.795 01:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:57.795 01:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.053 01:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:19:58.053 01:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.053 01:48:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:58.053 01:48:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.053 01:48:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:58.053 01:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:58.053 { 00:19:58.053 "cntlid": 71, 00:19:58.053 "qid": 0, 00:19:58.053 "state": "enabled", 00:19:58.053 "listen_address": { 00:19:58.053 "trtype": "TCP", 00:19:58.053 "adrfam": "IPv4", 00:19:58.053 "traddr": "10.0.0.2", 00:19:58.053 "trsvcid": "4420" 00:19:58.053 }, 00:19:58.053 "peer_address": { 00:19:58.053 "trtype": "TCP", 00:19:58.053 "adrfam": "IPv4", 00:19:58.053 "traddr": "10.0.0.1", 00:19:58.053 "trsvcid": "35608" 00:19:58.053 }, 00:19:58.053 "auth": { 00:19:58.053 "state": "completed", 00:19:58.053 "digest": "sha384", 00:19:58.053 "dhgroup": "ffdhe3072" 00:19:58.053 } 00:19:58.053 } 00:19:58.053 ]' 00:19:58.053 01:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:58.310 01:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:58.310 01:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:58.310 01:48:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:58.310 01:48:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:58.310 01:48:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.310 01:48:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.310 01:48:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.568 01:48:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDZhMmUxOGNmZjA4YTJjYTliNGYyOWIwMTlkMTIxZTY5NTMzZTdmODliNmE4YTI3NzA1N2Q5YjZjNzFlNzdlNnIXNiA=: 00:19:59.502 01:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.502 01:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:59.502 01:48:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:59.502 01:48:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.502 01:48:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:59.502 01:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:59.502 01:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:59.502 01:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:59.502 01:48:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:59.761 01:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 0 00:19:59.761 01:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:59.761 01:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:59.761 01:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:59.761 01:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:59.761 01:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:19:59.761 01:48:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:59.761 01:48:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.761 01:48:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:59.761 01:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:59.761 01:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:00.019 00:20:00.019 01:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:00.019 01:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.019 01:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:00.276 01:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.276 01:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.276 01:48:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:00.276 01:48:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.276 01:48:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:00.276 01:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:00.276 { 00:20:00.276 "cntlid": 73, 00:20:00.276 "qid": 0, 00:20:00.276 "state": "enabled", 00:20:00.276 "listen_address": { 00:20:00.276 "trtype": "TCP", 00:20:00.276 "adrfam": "IPv4", 00:20:00.276 "traddr": "10.0.0.2", 00:20:00.276 "trsvcid": "4420" 00:20:00.276 }, 00:20:00.276 "peer_address": { 00:20:00.276 "trtype": "TCP", 00:20:00.276 "adrfam": "IPv4", 00:20:00.276 "traddr": "10.0.0.1", 00:20:00.276 "trsvcid": "35650" 00:20:00.276 }, 00:20:00.276 "auth": { 00:20:00.276 "state": "completed", 00:20:00.276 "digest": "sha384", 00:20:00.276 "dhgroup": "ffdhe4096" 00:20:00.276 } 00:20:00.276 } 00:20:00.276 ]' 00:20:00.276 01:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r 
'.[0].auth.digest' 00:20:00.276 01:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:00.276 01:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:00.276 01:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:00.276 01:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:00.276 01:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.276 01:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.276 01:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.534 01:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZjAwYmIyOTNiNjRjOTU3NWY0YTk3M2RiMjAxNjViNTYwZjI1ZjQzN2RiY2MzMjkyW5t2hg==: 00:20:01.468 01:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.468 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.468 01:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:01.468 01:48:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:01.468 01:48:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.726 01:48:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:01.726 01:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:01.726 01:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:01.726 01:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:01.726 01:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 1 00:20:01.726 01:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:01.726 01:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:01.726 01:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:01.726 01:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:01.726 01:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:20:01.726 01:48:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:01.726 01:48:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.726 01:48:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:01.726 01:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:01.726 01:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:02.291 00:20:02.291 01:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:02.291 01:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:02.291 01:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.549 01:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.549 01:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.549 01:48:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:02.549 01:48:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.549 01:48:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:02.549 01:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:02.549 { 00:20:02.549 "cntlid": 75, 00:20:02.549 "qid": 0, 00:20:02.549 "state": "enabled", 00:20:02.549 "listen_address": { 00:20:02.549 "trtype": "TCP", 00:20:02.549 "adrfam": "IPv4", 00:20:02.549 "traddr": "10.0.0.2", 00:20:02.549 "trsvcid": "4420" 00:20:02.549 }, 00:20:02.549 "peer_address": { 00:20:02.549 "trtype": "TCP", 00:20:02.549 "adrfam": "IPv4", 00:20:02.549 "traddr": "10.0.0.1", 00:20:02.549 "trsvcid": "35678" 00:20:02.549 }, 00:20:02.549 "auth": { 00:20:02.549 "state": "completed", 00:20:02.549 "digest": "sha384", 00:20:02.549 "dhgroup": "ffdhe4096" 00:20:02.549 } 00:20:02.549 } 00:20:02.549 ]' 00:20:02.549 01:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:02.549 01:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:02.549 01:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:02.549 01:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:02.549 01:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:02.549 01:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.549 01:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.549 01:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.807 01:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OTA3OGEyNWYzOGNjNTZmNmU4MjRmM2RiZTY1YjEyOWVUtDjs: 00:20:03.741 01:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:20:03.741 01:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:03.741 01:48:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:03.741 01:48:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.741 01:48:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:03.741 01:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:03.741 01:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:03.741 01:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:03.999 01:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 2 00:20:03.999 01:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:03.999 01:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:03.999 01:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:03.999 01:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:03.999 01:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:20:03.999 01:48:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:03.999 01:48:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.999 01:48:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:03.999 01:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:03.999 01:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:04.564 00:20:04.564 01:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:04.564 01:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:04.564 01:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.822 01:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.822 01:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.822 01:48:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:04.822 01:48:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.822 01:48:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:20:04.822 01:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:04.822 { 00:20:04.822 "cntlid": 77, 00:20:04.822 "qid": 0, 00:20:04.822 "state": "enabled", 00:20:04.822 "listen_address": { 00:20:04.822 "trtype": "TCP", 00:20:04.822 "adrfam": "IPv4", 00:20:04.822 "traddr": "10.0.0.2", 00:20:04.822 "trsvcid": "4420" 00:20:04.822 }, 00:20:04.822 "peer_address": { 00:20:04.822 "trtype": "TCP", 00:20:04.822 "adrfam": "IPv4", 00:20:04.822 "traddr": "10.0.0.1", 00:20:04.822 "trsvcid": "35704" 00:20:04.822 }, 00:20:04.822 "auth": { 00:20:04.822 "state": "completed", 00:20:04.822 "digest": "sha384", 00:20:04.822 "dhgroup": "ffdhe4096" 00:20:04.822 } 00:20:04.822 } 00:20:04.822 ]' 00:20:04.822 01:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:04.822 01:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:04.822 01:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:04.822 01:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:04.822 01:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:04.822 01:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.822 01:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.822 01:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.079 01:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:YTVlNjI2YzlhZmNjYTkwZGM1YjI5Y2ExZDYxODM3YWFhM2E2YjY4NTU3YjU1MzUyS2g3Pw==: 00:20:06.012 01:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.012 01:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:06.012 01:48:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:06.012 01:48:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.271 01:48:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:06.271 01:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:06.271 01:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:06.271 01:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:06.533 01:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 3 00:20:06.533 01:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:06.533 01:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:06.533 01:48:30 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:06.533 01:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:06.533 01:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:06.533 01:48:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:06.533 01:48:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.533 01:48:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:06.533 01:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:06.533 01:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:06.841 00:20:06.841 01:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:06.841 01:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.841 01:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:07.098 01:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.098 01:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.099 01:48:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:07.099 01:48:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.099 01:48:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:07.099 01:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:07.099 { 00:20:07.099 "cntlid": 79, 00:20:07.099 "qid": 0, 00:20:07.099 "state": "enabled", 00:20:07.099 "listen_address": { 00:20:07.099 "trtype": "TCP", 00:20:07.099 "adrfam": "IPv4", 00:20:07.099 "traddr": "10.0.0.2", 00:20:07.099 "trsvcid": "4420" 00:20:07.099 }, 00:20:07.099 "peer_address": { 00:20:07.099 "trtype": "TCP", 00:20:07.099 "adrfam": "IPv4", 00:20:07.099 "traddr": "10.0.0.1", 00:20:07.099 "trsvcid": "35740" 00:20:07.099 }, 00:20:07.099 "auth": { 00:20:07.099 "state": "completed", 00:20:07.099 "digest": "sha384", 00:20:07.099 "dhgroup": "ffdhe4096" 00:20:07.099 } 00:20:07.099 } 00:20:07.099 ]' 00:20:07.099 01:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:07.099 01:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:07.099 01:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:07.099 01:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:07.099 01:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:07.099 01:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.099 01:48:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.099 01:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.356 01:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDZhMmUxOGNmZjA4YTJjYTliNGYyOWIwMTlkMTIxZTY5NTMzZTdmODliNmE4YTI3NzA1N2Q5YjZjNzFlNzdlNnIXNiA=: 00:20:08.288 01:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.288 01:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:08.288 01:48:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:08.288 01:48:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.288 01:48:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:08.288 01:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:08.288 01:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:08.288 01:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:08.288 01:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:08.546 01:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 0 00:20:08.546 01:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:08.546 01:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:08.546 01:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:08.546 01:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:08.546 01:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:20:08.546 01:48:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:08.546 01:48:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.546 01:48:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:08.546 01:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:08.546 01:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:09.111 00:20:09.111 01:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:09.111 01:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:09.111 01:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.368 01:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.368 01:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.368 01:48:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:09.368 01:48:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.368 01:48:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:09.368 01:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:09.368 { 00:20:09.368 "cntlid": 81, 00:20:09.368 "qid": 0, 00:20:09.368 "state": "enabled", 00:20:09.368 "listen_address": { 00:20:09.368 "trtype": "TCP", 00:20:09.368 "adrfam": "IPv4", 00:20:09.368 "traddr": "10.0.0.2", 00:20:09.368 "trsvcid": "4420" 00:20:09.368 }, 00:20:09.368 "peer_address": { 00:20:09.368 "trtype": "TCP", 00:20:09.368 "adrfam": "IPv4", 00:20:09.368 "traddr": "10.0.0.1", 00:20:09.368 "trsvcid": "49640" 00:20:09.368 }, 00:20:09.368 "auth": { 00:20:09.368 "state": "completed", 00:20:09.368 "digest": "sha384", 00:20:09.368 "dhgroup": "ffdhe6144" 00:20:09.368 } 00:20:09.368 } 00:20:09.368 ]' 00:20:09.368 01:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:09.368 01:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:09.368 01:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:09.626 01:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:09.626 01:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:09.626 01:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.626 01:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.626 01:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.884 01:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZjAwYmIyOTNiNjRjOTU3NWY0YTk3M2RiMjAxNjViNTYwZjI1ZjQzN2RiY2MzMjkyW5t2hg==: 00:20:10.818 01:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.819 01:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:10.819 01:48:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:10.819 01:48:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:20:10.819 01:48:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:10.819 01:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:10.819 01:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:10.819 01:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:11.077 01:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 1 00:20:11.077 01:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:11.077 01:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:11.077 01:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:11.077 01:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:11.077 01:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:20:11.077 01:48:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:11.077 01:48:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.077 01:48:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:11.077 01:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:11.077 01:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:11.642 00:20:11.642 01:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:11.642 01:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.642 01:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:11.900 01:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.900 01:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.900 01:48:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:11.900 01:48:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.900 01:48:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:11.900 01:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:11.900 { 00:20:11.900 "cntlid": 83, 00:20:11.900 "qid": 0, 00:20:11.900 "state": "enabled", 00:20:11.900 "listen_address": { 00:20:11.900 "trtype": "TCP", 00:20:11.900 "adrfam": "IPv4", 00:20:11.900 "traddr": "10.0.0.2", 00:20:11.900 "trsvcid": "4420" 00:20:11.900 }, 00:20:11.900 "peer_address": { 00:20:11.900 
"trtype": "TCP", 00:20:11.900 "adrfam": "IPv4", 00:20:11.900 "traddr": "10.0.0.1", 00:20:11.900 "trsvcid": "49676" 00:20:11.900 }, 00:20:11.900 "auth": { 00:20:11.900 "state": "completed", 00:20:11.900 "digest": "sha384", 00:20:11.900 "dhgroup": "ffdhe6144" 00:20:11.900 } 00:20:11.900 } 00:20:11.900 ]' 00:20:11.900 01:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:11.900 01:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:11.900 01:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:11.900 01:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:11.900 01:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:11.900 01:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.900 01:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.900 01:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.158 01:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OTA3OGEyNWYzOGNjNTZmNmU4MjRmM2RiZTY1YjEyOWVUtDjs: 00:20:13.091 01:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.091 01:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:13.091 01:48:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:13.091 01:48:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.091 01:48:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:13.091 01:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:13.091 01:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:13.091 01:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:13.349 01:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 2 00:20:13.349 01:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:13.349 01:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:13.349 01:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:13.349 01:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:13.349 01:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:20:13.349 01:48:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:20:13.349 01:48:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.349 01:48:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:13.349 01:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:13.349 01:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:13.913 00:20:13.913 01:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:13.913 01:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.913 01:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:14.171 01:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.171 01:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.171 01:48:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:14.171 01:48:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.171 01:48:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:14.171 01:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:14.171 { 00:20:14.171 "cntlid": 85, 00:20:14.171 "qid": 0, 00:20:14.171 "state": "enabled", 00:20:14.171 "listen_address": { 00:20:14.171 "trtype": "TCP", 00:20:14.171 "adrfam": "IPv4", 00:20:14.171 "traddr": "10.0.0.2", 00:20:14.171 "trsvcid": "4420" 00:20:14.171 }, 00:20:14.171 "peer_address": { 00:20:14.171 "trtype": "TCP", 00:20:14.171 "adrfam": "IPv4", 00:20:14.171 "traddr": "10.0.0.1", 00:20:14.171 "trsvcid": "49702" 00:20:14.171 }, 00:20:14.171 "auth": { 00:20:14.171 "state": "completed", 00:20:14.171 "digest": "sha384", 00:20:14.171 "dhgroup": "ffdhe6144" 00:20:14.171 } 00:20:14.171 } 00:20:14.171 ]' 00:20:14.171 01:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:14.171 01:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:14.171 01:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:14.171 01:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:14.171 01:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:14.171 01:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.171 01:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.171 01:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.429 01:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:YTVlNjI2YzlhZmNjYTkwZGM1YjI5Y2ExZDYxODM3YWFhM2E2YjY4NTU3YjU1MzUyS2g3Pw==: 00:20:15.361 01:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.361 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.361 01:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:15.361 01:48:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:15.361 01:48:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.361 01:48:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:15.361 01:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:15.361 01:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:15.361 01:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:15.619 01:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 3 00:20:15.619 01:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:15.619 01:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:15.619 01:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:15.619 01:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:15.619 01:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:15.619 01:48:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:15.619 01:48:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.619 01:48:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:15.619 01:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:15.619 01:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:16.186 00:20:16.186 01:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:16.186 01:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:16.186 01:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.444 01:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.444 01:48:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.444 01:48:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:16.444 01:48:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.444 01:48:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:16.444 01:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:16.444 { 00:20:16.444 "cntlid": 87, 00:20:16.444 "qid": 0, 00:20:16.444 "state": "enabled", 00:20:16.444 "listen_address": { 00:20:16.444 "trtype": "TCP", 00:20:16.444 "adrfam": "IPv4", 00:20:16.444 "traddr": "10.0.0.2", 00:20:16.444 "trsvcid": "4420" 00:20:16.444 }, 00:20:16.444 "peer_address": { 00:20:16.444 "trtype": "TCP", 00:20:16.444 "adrfam": "IPv4", 00:20:16.444 "traddr": "10.0.0.1", 00:20:16.444 "trsvcid": "49738" 00:20:16.444 }, 00:20:16.444 "auth": { 00:20:16.444 "state": "completed", 00:20:16.444 "digest": "sha384", 00:20:16.444 "dhgroup": "ffdhe6144" 00:20:16.444 } 00:20:16.444 } 00:20:16.444 ]' 00:20:16.444 01:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:16.444 01:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:16.444 01:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:16.702 01:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:16.702 01:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:16.702 01:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.702 01:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.702 01:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.959 01:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDZhMmUxOGNmZjA4YTJjYTliNGYyOWIwMTlkMTIxZTY5NTMzZTdmODliNmE4YTI3NzA1N2Q5YjZjNzFlNzdlNnIXNiA=: 00:20:17.893 01:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.893 01:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:17.893 01:48:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:17.893 01:48:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.893 01:48:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:17.893 01:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:17.893 01:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:17.893 01:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:17.893 01:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:18.152 01:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 0 00:20:18.152 01:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:18.152 01:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:18.152 01:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:18.152 01:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:18.152 01:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:20:18.152 01:48:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:18.152 01:48:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.152 01:48:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:18.152 01:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:18.152 01:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:19.085 00:20:19.085 01:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:19.085 01:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:19.085 01:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.085 01:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.085 01:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.085 01:48:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:19.085 01:48:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.085 01:48:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:19.085 01:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:19.085 { 00:20:19.085 "cntlid": 89, 00:20:19.085 "qid": 0, 00:20:19.085 "state": "enabled", 00:20:19.085 "listen_address": { 00:20:19.085 "trtype": "TCP", 00:20:19.085 "adrfam": "IPv4", 00:20:19.085 "traddr": "10.0.0.2", 00:20:19.085 "trsvcid": "4420" 00:20:19.085 }, 00:20:19.085 "peer_address": { 00:20:19.085 "trtype": "TCP", 00:20:19.085 "adrfam": "IPv4", 00:20:19.085 "traddr": "10.0.0.1", 00:20:19.085 "trsvcid": "44470" 00:20:19.085 }, 00:20:19.085 "auth": { 00:20:19.085 "state": "completed", 00:20:19.085 "digest": "sha384", 00:20:19.085 "dhgroup": "ffdhe8192" 00:20:19.085 } 00:20:19.085 } 00:20:19.085 ]' 00:20:19.085 01:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:19.342 01:48:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:19.342 01:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:19.342 01:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:19.342 01:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:19.342 01:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.342 01:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.342 01:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.600 01:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZjAwYmIyOTNiNjRjOTU3NWY0YTk3M2RiMjAxNjViNTYwZjI1ZjQzN2RiY2MzMjkyW5t2hg==: 00:20:20.531 01:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.531 01:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:20.531 01:48:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:20.531 01:48:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.531 01:48:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:20.531 01:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:20.531 01:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:20.531 01:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:20.789 01:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 1 00:20:20.789 01:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:20.789 01:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:20.789 01:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:20.789 01:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:20.789 01:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:20:20.789 01:48:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:20.789 01:48:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.789 01:48:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:20.789 01:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:20.789 01:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:21.723 00:20:21.723 01:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:21.723 01:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:21.723 01:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.723 01:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.723 01:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.723 01:48:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:21.723 01:48:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.723 01:48:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:21.723 01:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:21.723 { 00:20:21.723 "cntlid": 91, 00:20:21.723 "qid": 0, 00:20:21.723 "state": "enabled", 00:20:21.723 "listen_address": { 00:20:21.723 "trtype": "TCP", 00:20:21.723 "adrfam": "IPv4", 00:20:21.723 "traddr": "10.0.0.2", 00:20:21.723 "trsvcid": "4420" 00:20:21.723 }, 00:20:21.723 "peer_address": { 00:20:21.723 "trtype": "TCP", 00:20:21.723 "adrfam": "IPv4", 00:20:21.723 "traddr": "10.0.0.1", 00:20:21.723 "trsvcid": "44500" 00:20:21.723 }, 00:20:21.723 "auth": { 00:20:21.723 "state": "completed", 00:20:21.723 "digest": "sha384", 00:20:21.723 "dhgroup": "ffdhe8192" 00:20:21.723 } 00:20:21.723 } 00:20:21.723 ]' 00:20:21.723 01:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:21.981 01:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:21.981 01:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:21.981 01:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:21.981 01:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:21.981 01:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.981 01:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.981 01:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.239 01:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OTA3OGEyNWYzOGNjNTZmNmU4MjRmM2RiZTY1YjEyOWVUtDjs: 00:20:23.229 01:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:20:23.229 01:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:23.229 01:48:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:23.229 01:48:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.229 01:48:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:23.229 01:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:23.229 01:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:23.230 01:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:23.520 01:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 2 00:20:23.520 01:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:23.520 01:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:23.520 01:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:23.520 01:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:23.520 01:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:20:23.520 01:48:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:23.520 01:48:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.520 01:48:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:23.520 01:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:23.520 01:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:24.451 00:20:24.451 01:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:24.451 01:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:24.451 01:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.709 01:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.709 01:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.709 01:48:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:24.709 01:48:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.709 01:48:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
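Each nvme connect in this trace authenticates with a --dhchap-secret in the standard DH-HMAC-CHAP representation: the literal prefix DHHC-1:, a two-digit transform indicator (00 = secret used as-is, 01/02/03 = secret pre-hashed with SHA-256/384/512), the base64 encoding of the key material followed by a 4-byte CRC-32, and a closing colon. In this run the indicator happens to track the key slot (key0 uses 00, key3 uses 03), but the field describes the secret's transform, not the key index. A rough field-level check, reusing the key1 secret from the trace (variable names are illustrative):

    secret='DHHC-1:01:OTA3OGEyNWYzOGNjNTZmNmU4MjRmM2RiZTY1YjEyOWVUtDjs:'
    IFS=: read -r tag xform b64 _ <<< "$secret"
    [[ $tag == DHHC-1 ]]                        # fixed prefix
    [[ $xform =~ ^0[0-3]$ ]]                    # transform indicator
    # base64 payload = key bytes followed by a 4-byte CRC-32 of the key
    keylen=$(( $(printf %s "$b64" | base64 -d | wc -c) - 4 ))
    echo "key length: $keylen bytes"            # prints 32 for this secret
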
00:20:24.709 01:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:24.709 { 00:20:24.709 "cntlid": 93, 00:20:24.709 "qid": 0, 00:20:24.709 "state": "enabled", 00:20:24.709 "listen_address": { 00:20:24.709 "trtype": "TCP", 00:20:24.709 "adrfam": "IPv4", 00:20:24.709 "traddr": "10.0.0.2", 00:20:24.709 "trsvcid": "4420" 00:20:24.709 }, 00:20:24.709 "peer_address": { 00:20:24.709 "trtype": "TCP", 00:20:24.709 "adrfam": "IPv4", 00:20:24.709 "traddr": "10.0.0.1", 00:20:24.709 "trsvcid": "44532" 00:20:24.709 }, 00:20:24.709 "auth": { 00:20:24.709 "state": "completed", 00:20:24.709 "digest": "sha384", 00:20:24.709 "dhgroup": "ffdhe8192" 00:20:24.709 } 00:20:24.709 } 00:20:24.709 ]' 00:20:24.709 01:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:24.709 01:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:24.709 01:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:24.709 01:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:24.709 01:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:24.709 01:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.709 01:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.709 01:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.968 01:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:YTVlNjI2YzlhZmNjYTkwZGM1YjI5Y2ExZDYxODM3YWFhM2E2YjY4NTU3YjU1MzUyS2g3Pw==: 00:20:25.900 01:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.900 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.900 01:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:25.900 01:48:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:25.900 01:48:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.900 01:48:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:25.900 01:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:25.900 01:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:25.900 01:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:26.158 01:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 3 00:20:26.158 01:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:26.158 01:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:26.158 01:48:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:26.158 01:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:26.158 01:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:26.158 01:48:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:26.158 01:48:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.158 01:48:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:26.158 01:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:26.158 01:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:27.090 00:20:27.090 01:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:27.090 01:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.090 01:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:27.347 01:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.347 01:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.347 01:48:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:27.347 01:48:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.347 01:48:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:27.347 01:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:27.347 { 00:20:27.347 "cntlid": 95, 00:20:27.347 "qid": 0, 00:20:27.347 "state": "enabled", 00:20:27.347 "listen_address": { 00:20:27.347 "trtype": "TCP", 00:20:27.347 "adrfam": "IPv4", 00:20:27.347 "traddr": "10.0.0.2", 00:20:27.347 "trsvcid": "4420" 00:20:27.347 }, 00:20:27.347 "peer_address": { 00:20:27.347 "trtype": "TCP", 00:20:27.347 "adrfam": "IPv4", 00:20:27.347 "traddr": "10.0.0.1", 00:20:27.347 "trsvcid": "44566" 00:20:27.347 }, 00:20:27.347 "auth": { 00:20:27.347 "state": "completed", 00:20:27.347 "digest": "sha384", 00:20:27.347 "dhgroup": "ffdhe8192" 00:20:27.347 } 00:20:27.347 } 00:20:27.347 ]' 00:20:27.347 01:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:27.347 01:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:27.347 01:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:27.347 01:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:27.347 01:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:27.605 01:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.605 01:48:51 
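All four key slots of the sha384/ffdhe8192 combination have now been exercised, each through the same cycle: register the host NQN on the target with the key, attach from the SPDK host side, verify the queue pair's auth block, detach, then repeat the proof from the kernel initiator with nvme connect/disconnect before de-registering the host. A reconstruction of that per-key cycle in bash (helper and variable names follow the trace; $subnqn, $hostnqn, $hostid and the keys array are assumptions, and the body is not the verbatim auth.sh):

    for keyid in 0 1 2 3; do
        rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid"
        hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
                -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid"
        # ... jq assertions on nvmf_subsystem_get_qpairs, as sketched above ...
        hostrpc bdev_nvme_detach_controller nvme0
        nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
             --hostid "$hostid" --dhchap-secret "${keys[$keyid]}"
        nvme disconnect -n "$subnqn"
        rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
    done
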
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.605 01:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.862 01:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDZhMmUxOGNmZjA4YTJjYTliNGYyOWIwMTlkMTIxZTY5NTMzZTdmODliNmE4YTI3NzA1N2Q5YjZjNzFlNzdlNnIXNiA=: 00:20:28.795 01:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.795 01:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:28.795 01:48:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:28.795 01:48:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.795 01:48:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:28.795 01:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:20:28.795 01:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:28.795 01:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:28.795 01:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:28.795 01:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:29.053 01:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 0 00:20:29.053 01:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:29.053 01:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:29.053 01:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:29.053 01:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:29.053 01:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:20:29.053 01:48:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:29.053 01:48:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.053 01:48:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:29.053 01:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:29.053 01:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:29.311 00:20:29.311 01:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:29.311 01:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:29.311 01:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.569 01:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.569 01:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.569 01:48:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:29.569 01:48:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.569 01:48:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:29.569 01:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:29.569 { 00:20:29.569 "cntlid": 97, 00:20:29.569 "qid": 0, 00:20:29.569 "state": "enabled", 00:20:29.569 "listen_address": { 00:20:29.569 "trtype": "TCP", 00:20:29.569 "adrfam": "IPv4", 00:20:29.569 "traddr": "10.0.0.2", 00:20:29.569 "trsvcid": "4420" 00:20:29.569 }, 00:20:29.569 "peer_address": { 00:20:29.569 "trtype": "TCP", 00:20:29.569 "adrfam": "IPv4", 00:20:29.569 "traddr": "10.0.0.1", 00:20:29.569 "trsvcid": "58370" 00:20:29.569 }, 00:20:29.569 "auth": { 00:20:29.569 "state": "completed", 00:20:29.569 "digest": "sha512", 00:20:29.569 "dhgroup": "null" 00:20:29.569 } 00:20:29.569 } 00:20:29.569 ]' 00:20:29.569 01:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:29.569 01:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:29.569 01:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:29.569 01:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:20:29.569 01:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:29.826 01:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.826 01:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.826 01:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.083 01:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZjAwYmIyOTNiNjRjOTU3NWY0YTk3M2RiMjAxNjViNTYwZjI1ZjQzN2RiY2MzMjkyW5t2hg==: 00:20:31.016 01:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.016 01:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:31.016 01:48:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:31.016 01:48:54 
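The loops have now advanced to the sha512 digest paired with the null DH group. "null" here means plain DH-HMAC-CHAP: the challenge/response is still keyed with the shared secret and the chosen hash, but no ephemeral Diffie-Hellman exchange augments it, so the handshake authenticates without providing forward secrecy. The driving structure is visible in the trace itself at target/auth.sh@84-86; in outline (array contents are an assumption inferred from the combinations this excerpt exercises):

    for digest in "${digests[@]}"; do               # e.g. sha256 sha384 sha512
        for dhgroup in "${dhgroups[@]}"; do         # null ffdhe2048 ... ffdhe8192
            for keyid in "${!keys[@]}"; do          # 0 1 2 3
                hostrpc bdev_nvme_set_options --dhchap-digests "$digest" \
                        --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done
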
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.016 01:48:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:31.016 01:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:31.016 01:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:31.016 01:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:31.274 01:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 1 00:20:31.274 01:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:31.274 01:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:31.274 01:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:31.274 01:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:31.274 01:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:20:31.274 01:48:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:31.274 01:48:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.274 01:48:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:31.274 01:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:31.274 01:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:31.532 00:20:31.532 01:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:31.532 01:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:31.532 01:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.790 01:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.790 01:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.790 01:48:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:31.790 01:48:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.790 01:48:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:31.790 01:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:31.790 { 00:20:31.790 "cntlid": 99, 00:20:31.790 "qid": 0, 00:20:31.790 "state": "enabled", 00:20:31.790 "listen_address": { 00:20:31.790 "trtype": "TCP", 00:20:31.790 "adrfam": "IPv4", 00:20:31.790 "traddr": "10.0.0.2", 00:20:31.790 "trsvcid": "4420" 00:20:31.790 }, 
00:20:31.790 "peer_address": { 00:20:31.790 "trtype": "TCP", 00:20:31.790 "adrfam": "IPv4", 00:20:31.790 "traddr": "10.0.0.1", 00:20:31.790 "trsvcid": "58396" 00:20:31.790 }, 00:20:31.790 "auth": { 00:20:31.790 "state": "completed", 00:20:31.790 "digest": "sha512", 00:20:31.790 "dhgroup": "null" 00:20:31.790 } 00:20:31.790 } 00:20:31.790 ]' 00:20:31.790 01:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:31.790 01:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:31.790 01:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:31.790 01:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:20:31.790 01:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:31.790 01:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.790 01:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.790 01:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.048 01:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OTA3OGEyNWYzOGNjNTZmNmU4MjRmM2RiZTY1YjEyOWVUtDjs: 00:20:32.981 01:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.981 01:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:32.981 01:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:32.981 01:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.981 01:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:32.981 01:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:32.981 01:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:32.981 01:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:33.238 01:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 2 00:20:33.238 01:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:33.238 01:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:33.238 01:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:33.238 01:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:33.238 01:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:20:33.238 01:48:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:20:33.238 01:48:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.238 01:48:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:33.238 01:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:33.238 01:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:33.803 00:20:33.803 01:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:33.804 01:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:33.804 01:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.061 01:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.061 01:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.061 01:48:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:34.061 01:48:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.061 01:48:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:34.061 01:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:34.061 { 00:20:34.061 "cntlid": 101, 00:20:34.061 "qid": 0, 00:20:34.061 "state": "enabled", 00:20:34.061 "listen_address": { 00:20:34.061 "trtype": "TCP", 00:20:34.061 "adrfam": "IPv4", 00:20:34.061 "traddr": "10.0.0.2", 00:20:34.061 "trsvcid": "4420" 00:20:34.061 }, 00:20:34.061 "peer_address": { 00:20:34.061 "trtype": "TCP", 00:20:34.061 "adrfam": "IPv4", 00:20:34.061 "traddr": "10.0.0.1", 00:20:34.061 "trsvcid": "58422" 00:20:34.061 }, 00:20:34.061 "auth": { 00:20:34.061 "state": "completed", 00:20:34.061 "digest": "sha512", 00:20:34.061 "dhgroup": "null" 00:20:34.061 } 00:20:34.061 } 00:20:34.061 ]' 00:20:34.061 01:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:34.061 01:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:34.061 01:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:34.061 01:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:20:34.061 01:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:34.061 01:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.061 01:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.061 01:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.319 01:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:YTVlNjI2YzlhZmNjYTkwZGM1YjI5Y2ExZDYxODM3YWFhM2E2YjY4NTU3YjU1MzUyS2g3Pw==: 00:20:35.252 01:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.252 01:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:35.252 01:48:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:35.252 01:48:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.252 01:48:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:35.252 01:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:35.252 01:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:35.252 01:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:35.511 01:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 3 00:20:35.511 01:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:35.511 01:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:35.511 01:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:35.511 01:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:35.511 01:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:35.511 01:48:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:35.511 01:48:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.511 01:48:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:35.511 01:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:35.511 01:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:35.769 00:20:35.769 01:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:35.769 01:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:35.769 01:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.027 01:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.027 01:48:59 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.027 01:48:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:36.027 01:48:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.027 01:48:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:36.027 01:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:36.027 { 00:20:36.027 "cntlid": 103, 00:20:36.027 "qid": 0, 00:20:36.027 "state": "enabled", 00:20:36.027 "listen_address": { 00:20:36.027 "trtype": "TCP", 00:20:36.027 "adrfam": "IPv4", 00:20:36.027 "traddr": "10.0.0.2", 00:20:36.027 "trsvcid": "4420" 00:20:36.027 }, 00:20:36.027 "peer_address": { 00:20:36.027 "trtype": "TCP", 00:20:36.027 "adrfam": "IPv4", 00:20:36.027 "traddr": "10.0.0.1", 00:20:36.027 "trsvcid": "58442" 00:20:36.027 }, 00:20:36.027 "auth": { 00:20:36.027 "state": "completed", 00:20:36.027 "digest": "sha512", 00:20:36.027 "dhgroup": "null" 00:20:36.027 } 00:20:36.027 } 00:20:36.027 ]' 00:20:36.027 01:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:36.286 01:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:36.286 01:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:36.286 01:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:20:36.286 01:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:36.286 01:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.286 01:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.286 01:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.544 01:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDZhMmUxOGNmZjA4YTJjYTliNGYyOWIwMTlkMTIxZTY5NTMzZTdmODliNmE4YTI3NzA1N2Q5YjZjNzFlNzdlNnIXNiA=: 00:20:37.477 01:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.477 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.477 01:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:37.477 01:49:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:37.477 01:49:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.477 01:49:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:37.477 01:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:37.477 01:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:37.477 01:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:37.477 01:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:37.735 01:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 0 00:20:37.735 01:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:37.735 01:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:37.735 01:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:37.735 01:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:37.735 01:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:20:37.735 01:49:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:37.735 01:49:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.735 01:49:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:37.735 01:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:37.735 01:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:37.993 00:20:37.993 01:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:37.993 01:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:37.993 01:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.251 01:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.251 01:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.251 01:49:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:38.251 01:49:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.251 01:49:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:38.251 01:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:38.251 { 00:20:38.251 "cntlid": 105, 00:20:38.251 "qid": 0, 00:20:38.251 "state": "enabled", 00:20:38.251 "listen_address": { 00:20:38.251 "trtype": "TCP", 00:20:38.251 "adrfam": "IPv4", 00:20:38.251 "traddr": "10.0.0.2", 00:20:38.251 "trsvcid": "4420" 00:20:38.251 }, 00:20:38.251 "peer_address": { 00:20:38.251 "trtype": "TCP", 00:20:38.251 "adrfam": "IPv4", 00:20:38.251 "traddr": "10.0.0.1", 00:20:38.251 "trsvcid": "60288" 00:20:38.251 }, 00:20:38.251 "auth": { 00:20:38.251 "state": "completed", 00:20:38.251 "digest": "sha512", 00:20:38.251 "dhgroup": "ffdhe2048" 00:20:38.251 } 00:20:38.251 } 00:20:38.251 ]' 00:20:38.251 01:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:38.251 01:49:02 
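With the null group done, the dhgroup loop has moved to ffdhe2048, the first of the finite-field Diffie-Hellman groups from RFC 7919. These add an ephemeral DH exchange on top of the HMAC challenge/response, and the suffix is the modulus size in bits; to the best of my knowledge SPDK accepts the full RFC 7919 set here, though only some of it appears in this excerpt:

    null        HMAC challenge/response only, no DH augmentation
    ffdhe2048   2048-bit group (this pass)
    ffdhe3072   3072-bit group
    ffdhe4096   4096-bit group
    ffdhe6144   6144-bit group
    ffdhe8192   8192-bit group (the sha384 passes above)

Larger groups make each handshake more expensive, which appears to show up as slightly longer gaps between attach and the qpairs dump in the ffdhe8192 iterations.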
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:38.251 01:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:38.251 01:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:38.251 01:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:38.508 01:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.508 01:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.508 01:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.766 01:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZjAwYmIyOTNiNjRjOTU3NWY0YTk3M2RiMjAxNjViNTYwZjI1ZjQzN2RiY2MzMjkyW5t2hg==: 00:20:39.731 01:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.731 01:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:39.731 01:49:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:39.731 01:49:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.731 01:49:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:39.731 01:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:39.731 01:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:39.731 01:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:39.731 01:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 1 00:20:39.731 01:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:39.731 01:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:39.731 01:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:39.731 01:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:39.731 01:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:20:39.731 01:49:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:39.731 01:49:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.731 01:49:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:39.731 01:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:39.731 01:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:40.296 00:20:40.296 01:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:40.296 01:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:40.296 01:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.296 01:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.296 01:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.296 01:49:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:40.296 01:49:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.296 01:49:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:40.296 01:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:40.296 { 00:20:40.296 "cntlid": 107, 00:20:40.296 "qid": 0, 00:20:40.296 "state": "enabled", 00:20:40.296 "listen_address": { 00:20:40.296 "trtype": "TCP", 00:20:40.296 "adrfam": "IPv4", 00:20:40.296 "traddr": "10.0.0.2", 00:20:40.296 "trsvcid": "4420" 00:20:40.296 }, 00:20:40.296 "peer_address": { 00:20:40.296 "trtype": "TCP", 00:20:40.296 "adrfam": "IPv4", 00:20:40.296 "traddr": "10.0.0.1", 00:20:40.296 "trsvcid": "60316" 00:20:40.296 }, 00:20:40.296 "auth": { 00:20:40.296 "state": "completed", 00:20:40.296 "digest": "sha512", 00:20:40.296 "dhgroup": "ffdhe2048" 00:20:40.296 } 00:20:40.296 } 00:20:40.296 ]' 00:20:40.296 01:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:40.553 01:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:40.553 01:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:40.553 01:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:40.553 01:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:40.553 01:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.553 01:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.553 01:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.811 01:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OTA3OGEyNWYzOGNjNTZmNmU4MjRmM2RiZTY1YjEyOWVUtDjs: 00:20:41.745 01:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.745 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:20:41.745 01:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:41.745 01:49:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:41.745 01:49:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.745 01:49:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:41.745 01:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:41.745 01:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:41.745 01:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:42.003 01:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 2 00:20:42.003 01:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:42.003 01:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:42.003 01:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:42.003 01:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:42.003 01:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:20:42.003 01:49:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:42.003 01:49:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.003 01:49:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:42.003 01:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:42.003 01:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:42.261 00:20:42.261 01:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:42.261 01:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.261 01:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:42.519 01:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.519 01:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.519 01:49:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:42.520 01:49:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.520 01:49:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
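Throughout the trace, rpc_cmd talks to the target application's default RPC socket, while every hostrpc call expands to scripts/rpc.py -s /var/tmp/host.sock: the test runs two SPDK processes, one exporting the subsystem and one whose bdev_nvme module plays the authenticating host. A plausible reconstruction of the wrapper at target/auth.sh@31, inferred from its expansion (the $rootdir variable is an assumption):

    hostrpc() {
        # Route the RPC to the second, host-side SPDK instance
        "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
    }

    hostrpc bdev_nvme_get_controllers | jq -r '.[].name'   # -> nvme0 while attached
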
00:20:42.520 01:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:42.520 { 00:20:42.520 "cntlid": 109, 00:20:42.520 "qid": 0, 00:20:42.520 "state": "enabled", 00:20:42.520 "listen_address": { 00:20:42.520 "trtype": "TCP", 00:20:42.520 "adrfam": "IPv4", 00:20:42.520 "traddr": "10.0.0.2", 00:20:42.520 "trsvcid": "4420" 00:20:42.520 }, 00:20:42.520 "peer_address": { 00:20:42.520 "trtype": "TCP", 00:20:42.520 "adrfam": "IPv4", 00:20:42.520 "traddr": "10.0.0.1", 00:20:42.520 "trsvcid": "60350" 00:20:42.520 }, 00:20:42.520 "auth": { 00:20:42.520 "state": "completed", 00:20:42.520 "digest": "sha512", 00:20:42.520 "dhgroup": "ffdhe2048" 00:20:42.520 } 00:20:42.520 } 00:20:42.520 ]' 00:20:42.520 01:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:42.520 01:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:42.520 01:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:42.777 01:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:42.777 01:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:42.777 01:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.777 01:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.777 01:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.035 01:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:YTVlNjI2YzlhZmNjYTkwZGM1YjI5Y2ExZDYxODM3YWFhM2E2YjY4NTU3YjU1MzUyS2g3Pw==: 00:20:43.969 01:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.969 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.969 01:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:43.969 01:49:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:43.969 01:49:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.969 01:49:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:43.969 01:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:43.969 01:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:43.969 01:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:44.227 01:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 3 00:20:44.227 01:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:44.227 01:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:44.227 01:49:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:44.227 01:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:44.227 01:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:44.227 01:49:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:44.227 01:49:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.227 01:49:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:44.227 01:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:44.227 01:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:44.485 00:20:44.485 01:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:44.485 01:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.485 01:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:44.743 01:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.743 01:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.743 01:49:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:44.743 01:49:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.743 01:49:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:44.743 01:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:44.743 { 00:20:44.743 "cntlid": 111, 00:20:44.743 "qid": 0, 00:20:44.743 "state": "enabled", 00:20:44.743 "listen_address": { 00:20:44.743 "trtype": "TCP", 00:20:44.743 "adrfam": "IPv4", 00:20:44.743 "traddr": "10.0.0.2", 00:20:44.743 "trsvcid": "4420" 00:20:44.743 }, 00:20:44.743 "peer_address": { 00:20:44.743 "trtype": "TCP", 00:20:44.743 "adrfam": "IPv4", 00:20:44.743 "traddr": "10.0.0.1", 00:20:44.743 "trsvcid": "60386" 00:20:44.743 }, 00:20:44.743 "auth": { 00:20:44.743 "state": "completed", 00:20:44.743 "digest": "sha512", 00:20:44.743 "dhgroup": "ffdhe2048" 00:20:44.743 } 00:20:44.743 } 00:20:44.743 ]' 00:20:44.743 01:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:44.743 01:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:44.743 01:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:44.743 01:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:44.743 01:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:44.743 01:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.743 01:49:08 
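One detail repeated in every iteration's tail: the initiator always disconnects first (bdev_nvme_detach_controller or nvme disconnect) and only then is the host entry removed from the subsystem, so nvmf_subsystem_remove_host never has to revoke a live, authenticated connection. The teardown pair as it recurs in the trace, with the host NQN abbreviated to a variable:

    nvme disconnect -n nqn.2024-03.io.spdk:cnode0        # drop the kernel-host session first
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"   # then de-authorize
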
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.743 01:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.000 01:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDZhMmUxOGNmZjA4YTJjYTliNGYyOWIwMTlkMTIxZTY5NTMzZTdmODliNmE4YTI3NzA1N2Q5YjZjNzFlNzdlNnIXNiA=: 00:20:45.936 01:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.936 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.936 01:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:45.936 01:49:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:45.936 01:49:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.936 01:49:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:45.936 01:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:45.936 01:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:45.936 01:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:45.936 01:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:46.194 01:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 0 00:20:46.194 01:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:46.194 01:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:46.194 01:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:46.194 01:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:46.194 01:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:20:46.194 01:49:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:46.194 01:49:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.194 01:49:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:46.194 01:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:46.194 01:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:46.758 00:20:46.758 01:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:46.758 01:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:46.758 01:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.758 01:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.758 01:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.758 01:49:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:46.758 01:49:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.015 01:49:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:47.015 01:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:47.015 { 00:20:47.015 "cntlid": 113, 00:20:47.015 "qid": 0, 00:20:47.015 "state": "enabled", 00:20:47.015 "listen_address": { 00:20:47.015 "trtype": "TCP", 00:20:47.015 "adrfam": "IPv4", 00:20:47.015 "traddr": "10.0.0.2", 00:20:47.015 "trsvcid": "4420" 00:20:47.015 }, 00:20:47.015 "peer_address": { 00:20:47.015 "trtype": "TCP", 00:20:47.015 "adrfam": "IPv4", 00:20:47.015 "traddr": "10.0.0.1", 00:20:47.015 "trsvcid": "60416" 00:20:47.015 }, 00:20:47.015 "auth": { 00:20:47.015 "state": "completed", 00:20:47.015 "digest": "sha512", 00:20:47.015 "dhgroup": "ffdhe3072" 00:20:47.015 } 00:20:47.015 } 00:20:47.015 ]' 00:20:47.015 01:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:47.015 01:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:47.015 01:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:47.015 01:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:47.015 01:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:47.015 01:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.015 01:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.015 01:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.272 01:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZjAwYmIyOTNiNjRjOTU3NWY0YTk3M2RiMjAxNjViNTYwZjI1ZjQzN2RiY2MzMjkyW5t2hg==: 00:20:48.206 01:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.206 01:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:48.206 01:49:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:48.206 01:49:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:20:48.206 01:49:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:48.206 01:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:48.206 01:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:48.206 01:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:48.464 01:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 1 00:20:48.464 01:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:48.464 01:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:48.464 01:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:48.464 01:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:48.464 01:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:20:48.464 01:49:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:48.464 01:49:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.464 01:49:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:48.464 01:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:48.464 01:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:49.031 00:20:49.031 01:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:49.031 01:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:49.031 01:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.031 01:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.031 01:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.031 01:49:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:49.031 01:49:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.031 01:49:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:49.031 01:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:49.031 { 00:20:49.031 "cntlid": 115, 00:20:49.031 "qid": 0, 00:20:49.031 "state": "enabled", 00:20:49.031 "listen_address": { 00:20:49.031 "trtype": "TCP", 00:20:49.031 "adrfam": "IPv4", 00:20:49.031 "traddr": "10.0.0.2", 00:20:49.031 "trsvcid": "4420" 00:20:49.031 }, 00:20:49.031 "peer_address": { 00:20:49.031 
"trtype": "TCP", 00:20:49.031 "adrfam": "IPv4", 00:20:49.031 "traddr": "10.0.0.1", 00:20:49.031 "trsvcid": "42140" 00:20:49.031 }, 00:20:49.031 "auth": { 00:20:49.031 "state": "completed", 00:20:49.031 "digest": "sha512", 00:20:49.031 "dhgroup": "ffdhe3072" 00:20:49.031 } 00:20:49.031 } 00:20:49.031 ]' 00:20:49.031 01:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:49.288 01:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:49.288 01:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:49.288 01:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:49.288 01:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:49.288 01:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.288 01:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.288 01:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.546 01:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OTA3OGEyNWYzOGNjNTZmNmU4MjRmM2RiZTY1YjEyOWVUtDjs: 00:20:50.479 01:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.479 01:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:50.479 01:49:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:50.479 01:49:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.479 01:49:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:50.479 01:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:50.479 01:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:50.479 01:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:50.737 01:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 2 00:20:50.737 01:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:50.737 01:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:50.737 01:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:50.737 01:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:50.737 01:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:20:50.737 01:49:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:20:50.737 01:49:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.737 01:49:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:50.737 01:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:50.737 01:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:50.995 00:20:50.995 01:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:50.995 01:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:50.995 01:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.253 01:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.253 01:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.253 01:49:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:51.253 01:49:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.511 01:49:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:51.511 01:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:51.511 { 00:20:51.511 "cntlid": 117, 00:20:51.511 "qid": 0, 00:20:51.511 "state": "enabled", 00:20:51.511 "listen_address": { 00:20:51.511 "trtype": "TCP", 00:20:51.511 "adrfam": "IPv4", 00:20:51.511 "traddr": "10.0.0.2", 00:20:51.511 "trsvcid": "4420" 00:20:51.511 }, 00:20:51.511 "peer_address": { 00:20:51.511 "trtype": "TCP", 00:20:51.511 "adrfam": "IPv4", 00:20:51.511 "traddr": "10.0.0.1", 00:20:51.511 "trsvcid": "42166" 00:20:51.511 }, 00:20:51.511 "auth": { 00:20:51.511 "state": "completed", 00:20:51.511 "digest": "sha512", 00:20:51.511 "dhgroup": "ffdhe3072" 00:20:51.511 } 00:20:51.511 } 00:20:51.511 ]' 00:20:51.511 01:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:51.511 01:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:51.511 01:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:51.511 01:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:51.511 01:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:51.511 01:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.511 01:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.511 01:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.769 01:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:YTVlNjI2YzlhZmNjYTkwZGM1YjI5Y2ExZDYxODM3YWFhM2E2YjY4NTU3YjU1MzUyS2g3Pw==: 00:20:52.702 01:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.702 01:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:52.702 01:49:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:52.702 01:49:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.702 01:49:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:52.702 01:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:52.702 01:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:52.703 01:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:52.960 01:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 3 00:20:52.960 01:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:52.960 01:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:52.960 01:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:52.960 01:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:52.960 01:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:52.960 01:49:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:52.960 01:49:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.960 01:49:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:52.960 01:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:52.960 01:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:53.218 00:20:53.218 01:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:53.218 01:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:53.218 01:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.476 01:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.476 01:49:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.476 01:49:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:53.476 01:49:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.734 01:49:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:53.734 01:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:53.734 { 00:20:53.734 "cntlid": 119, 00:20:53.734 "qid": 0, 00:20:53.734 "state": "enabled", 00:20:53.734 "listen_address": { 00:20:53.734 "trtype": "TCP", 00:20:53.734 "adrfam": "IPv4", 00:20:53.734 "traddr": "10.0.0.2", 00:20:53.734 "trsvcid": "4420" 00:20:53.734 }, 00:20:53.734 "peer_address": { 00:20:53.734 "trtype": "TCP", 00:20:53.734 "adrfam": "IPv4", 00:20:53.734 "traddr": "10.0.0.1", 00:20:53.734 "trsvcid": "42198" 00:20:53.734 }, 00:20:53.734 "auth": { 00:20:53.734 "state": "completed", 00:20:53.734 "digest": "sha512", 00:20:53.734 "dhgroup": "ffdhe3072" 00:20:53.734 } 00:20:53.734 } 00:20:53.734 ]' 00:20:53.734 01:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:53.734 01:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:53.734 01:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:53.734 01:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:53.734 01:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:53.734 01:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.734 01:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.734 01:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.991 01:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDZhMmUxOGNmZjA4YTJjYTliNGYyOWIwMTlkMTIxZTY5NTMzZTdmODliNmE4YTI3NzA1N2Q5YjZjNzFlNzdlNnIXNiA=: 00:20:54.925 01:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.925 01:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:54.925 01:49:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:54.925 01:49:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.925 01:49:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:54.925 01:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:54.925 01:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:54.925 01:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:54.925 01:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:55.183 01:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 0 00:20:55.183 01:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:55.183 01:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:55.183 01:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:55.183 01:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:55.183 01:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:20:55.183 01:49:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:55.183 01:49:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.183 01:49:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:55.183 01:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:55.183 01:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:55.814 00:20:55.814 01:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:55.814 01:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.814 01:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:55.814 01:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.814 01:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.814 01:49:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:55.814 01:49:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.814 01:49:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:55.814 01:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:55.814 { 00:20:55.814 "cntlid": 121, 00:20:55.814 "qid": 0, 00:20:55.814 "state": "enabled", 00:20:55.814 "listen_address": { 00:20:55.814 "trtype": "TCP", 00:20:55.814 "adrfam": "IPv4", 00:20:55.814 "traddr": "10.0.0.2", 00:20:55.814 "trsvcid": "4420" 00:20:55.814 }, 00:20:55.814 "peer_address": { 00:20:55.814 "trtype": "TCP", 00:20:55.814 "adrfam": "IPv4", 00:20:55.814 "traddr": "10.0.0.1", 00:20:55.814 "trsvcid": "42234" 00:20:55.814 }, 00:20:55.814 "auth": { 00:20:55.814 "state": "completed", 00:20:55.814 "digest": "sha512", 00:20:55.814 "dhgroup": "ffdhe4096" 00:20:55.814 } 00:20:55.814 } 00:20:55.814 ]' 00:20:55.814 01:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:55.814 01:49:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:55.814 01:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:56.072 01:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:56.072 01:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:56.072 01:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.072 01:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.072 01:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.329 01:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZjAwYmIyOTNiNjRjOTU3NWY0YTk3M2RiMjAxNjViNTYwZjI1ZjQzN2RiY2MzMjkyW5t2hg==: 00:20:57.260 01:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.260 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.260 01:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:57.260 01:49:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:57.260 01:49:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.260 01:49:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:57.260 01:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:57.260 01:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:57.260 01:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:57.517 01:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 1 00:20:57.517 01:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:57.517 01:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:57.517 01:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:57.518 01:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:57.518 01:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:20:57.518 01:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:57.518 01:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.518 01:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:57.518 01:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:57.518 01:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:57.775 00:20:57.775 01:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:57.775 01:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:57.775 01:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.032 01:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.032 01:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.032 01:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:58.032 01:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.032 01:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:58.032 01:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:58.032 { 00:20:58.032 "cntlid": 123, 00:20:58.032 "qid": 0, 00:20:58.032 "state": "enabled", 00:20:58.032 "listen_address": { 00:20:58.032 "trtype": "TCP", 00:20:58.032 "adrfam": "IPv4", 00:20:58.032 "traddr": "10.0.0.2", 00:20:58.032 "trsvcid": "4420" 00:20:58.032 }, 00:20:58.032 "peer_address": { 00:20:58.032 "trtype": "TCP", 00:20:58.032 "adrfam": "IPv4", 00:20:58.032 "traddr": "10.0.0.1", 00:20:58.032 "trsvcid": "35240" 00:20:58.032 }, 00:20:58.032 "auth": { 00:20:58.032 "state": "completed", 00:20:58.032 "digest": "sha512", 00:20:58.032 "dhgroup": "ffdhe4096" 00:20:58.032 } 00:20:58.032 } 00:20:58.032 ]' 00:20:58.032 01:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:58.032 01:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:58.032 01:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:58.290 01:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:58.290 01:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:58.290 01:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.290 01:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.290 01:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.548 01:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OTA3OGEyNWYzOGNjNTZmNmU4MjRmM2RiZTY1YjEyOWVUtDjs: 00:20:59.482 01:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:20:59.482 01:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:59.482 01:49:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:59.482 01:49:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.482 01:49:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:59.482 01:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:59.482 01:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:59.482 01:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:59.740 01:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 2 00:20:59.741 01:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:59.741 01:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:59.741 01:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:59.741 01:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:59.741 01:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:20:59.741 01:49:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:59.741 01:49:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.741 01:49:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:59.741 01:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:59.741 01:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:59.999 00:21:00.257 01:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:00.257 01:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:00.257 01:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.515 01:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.515 01:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.515 01:49:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:00.515 01:49:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.515 01:49:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
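The pass above is one iteration of the suite's digest/dhgroup/key sweep: target/auth.sh pins the host to a single digest and DH group via bdev_nvme_set_options, registers the host NQN on the subsystem with the key under test, attaches a controller over the host RPC socket, and then (in the qpair dump that follows) confirms the negotiated digest, dhgroup, and auth state before detaching. Condensed into a standalone sketch for the iteration shown here (sha512 / ffdhe4096 / key2): the paths, NQNs, flags, and jq filters are copied from the log; the shell variable names are mine for brevity, rpc_cmd stands in for the suite's target-side RPC wrapper, and the layout is an assumption, not the verbatim target/auth.sh helper.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # host-side RPC client
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a

    # Pin the host to the digest/dhgroup combination under test.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

    # Allow the host on the subsystem with the key being tested (target-side RPC).
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2

    # Attach a controller, then verify what the target negotiated on qpair 0.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key2
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    # Tear down before the next iteration of the sweep.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0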
00:21:00.515 01:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:00.515 { 00:21:00.515 "cntlid": 125, 00:21:00.515 "qid": 0, 00:21:00.515 "state": "enabled", 00:21:00.515 "listen_address": { 00:21:00.515 "trtype": "TCP", 00:21:00.515 "adrfam": "IPv4", 00:21:00.515 "traddr": "10.0.0.2", 00:21:00.515 "trsvcid": "4420" 00:21:00.515 }, 00:21:00.515 "peer_address": { 00:21:00.515 "trtype": "TCP", 00:21:00.515 "adrfam": "IPv4", 00:21:00.515 "traddr": "10.0.0.1", 00:21:00.515 "trsvcid": "35264" 00:21:00.515 }, 00:21:00.515 "auth": { 00:21:00.515 "state": "completed", 00:21:00.515 "digest": "sha512", 00:21:00.515 "dhgroup": "ffdhe4096" 00:21:00.515 } 00:21:00.515 } 00:21:00.515 ]' 00:21:00.515 01:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:00.515 01:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:00.515 01:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:00.515 01:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:00.515 01:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:00.515 01:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.515 01:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.515 01:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.772 01:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:YTVlNjI2YzlhZmNjYTkwZGM1YjI5Y2ExZDYxODM3YWFhM2E2YjY4NTU3YjU1MzUyS2g3Pw==: 00:21:01.704 01:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.704 01:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:01.704 01:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:01.704 01:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.704 01:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:01.704 01:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:01.704 01:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:01.704 01:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:01.961 01:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 3 00:21:01.961 01:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:01.961 01:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:01.961 01:49:25 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:01.961 01:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:01.961 01:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:21:01.961 01:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:01.961 01:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.961 01:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:01.961 01:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:01.961 01:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:02.527 00:21:02.527 01:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:02.527 01:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:02.527 01:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.527 01:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.527 01:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.527 01:49:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:02.527 01:49:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.785 01:49:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:02.785 01:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:02.785 { 00:21:02.785 "cntlid": 127, 00:21:02.785 "qid": 0, 00:21:02.785 "state": "enabled", 00:21:02.785 "listen_address": { 00:21:02.785 "trtype": "TCP", 00:21:02.785 "adrfam": "IPv4", 00:21:02.785 "traddr": "10.0.0.2", 00:21:02.785 "trsvcid": "4420" 00:21:02.785 }, 00:21:02.785 "peer_address": { 00:21:02.785 "trtype": "TCP", 00:21:02.785 "adrfam": "IPv4", 00:21:02.785 "traddr": "10.0.0.1", 00:21:02.785 "trsvcid": "35300" 00:21:02.785 }, 00:21:02.785 "auth": { 00:21:02.785 "state": "completed", 00:21:02.785 "digest": "sha512", 00:21:02.785 "dhgroup": "ffdhe4096" 00:21:02.785 } 00:21:02.785 } 00:21:02.785 ]' 00:21:02.785 01:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:02.785 01:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:02.785 01:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:02.785 01:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:02.785 01:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:02.785 01:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.785 01:49:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.785 01:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.043 01:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDZhMmUxOGNmZjA4YTJjYTliNGYyOWIwMTlkMTIxZTY5NTMzZTdmODliNmE4YTI3NzA1N2Q5YjZjNzFlNzdlNnIXNiA=: 00:21:03.976 01:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.976 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.976 01:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:03.976 01:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:03.976 01:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.976 01:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:03.976 01:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:03.976 01:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:03.976 01:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:03.976 01:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:04.234 01:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 0 00:21:04.234 01:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:04.234 01:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:04.234 01:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:04.234 01:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:04.234 01:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:21:04.234 01:49:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:04.234 01:49:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.234 01:49:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:04.234 01:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:04.234 01:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:04.801 00:21:04.801 01:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:04.801 01:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:04.801 01:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.059 01:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.059 01:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.059 01:49:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:05.059 01:49:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.059 01:49:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:05.059 01:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:05.059 { 00:21:05.059 "cntlid": 129, 00:21:05.059 "qid": 0, 00:21:05.059 "state": "enabled", 00:21:05.059 "listen_address": { 00:21:05.059 "trtype": "TCP", 00:21:05.059 "adrfam": "IPv4", 00:21:05.059 "traddr": "10.0.0.2", 00:21:05.059 "trsvcid": "4420" 00:21:05.059 }, 00:21:05.059 "peer_address": { 00:21:05.059 "trtype": "TCP", 00:21:05.059 "adrfam": "IPv4", 00:21:05.059 "traddr": "10.0.0.1", 00:21:05.059 "trsvcid": "35326" 00:21:05.059 }, 00:21:05.059 "auth": { 00:21:05.059 "state": "completed", 00:21:05.059 "digest": "sha512", 00:21:05.059 "dhgroup": "ffdhe6144" 00:21:05.059 } 00:21:05.059 } 00:21:05.059 ]' 00:21:05.059 01:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:05.059 01:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:05.059 01:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:05.317 01:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:05.317 01:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:05.317 01:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.317 01:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.317 01:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.575 01:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZjAwYmIyOTNiNjRjOTU3NWY0YTk3M2RiMjAxNjViNTYwZjI1ZjQzN2RiY2MzMjkyW5t2hg==: 00:21:06.513 01:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.513 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.513 01:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:06.513 01:49:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:06.513 01:49:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:21:06.513 01:49:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:06.513 01:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:06.513 01:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:06.513 01:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:06.770 01:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 1 00:21:06.770 01:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:06.770 01:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:06.770 01:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:06.770 01:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:06.770 01:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:21:06.770 01:49:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:06.770 01:49:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.770 01:49:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:06.770 01:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:06.770 01:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:07.336 00:21:07.336 01:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:07.336 01:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:07.336 01:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.594 01:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.594 01:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.594 01:49:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:07.594 01:49:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.594 01:49:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:07.594 01:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:07.594 { 00:21:07.594 "cntlid": 131, 00:21:07.594 "qid": 0, 00:21:07.594 "state": "enabled", 00:21:07.594 "listen_address": { 00:21:07.594 "trtype": "TCP", 00:21:07.594 "adrfam": "IPv4", 00:21:07.594 "traddr": "10.0.0.2", 00:21:07.594 "trsvcid": "4420" 00:21:07.594 }, 00:21:07.594 "peer_address": { 00:21:07.594 
"trtype": "TCP", 00:21:07.594 "adrfam": "IPv4", 00:21:07.594 "traddr": "10.0.0.1", 00:21:07.594 "trsvcid": "35352" 00:21:07.594 }, 00:21:07.594 "auth": { 00:21:07.594 "state": "completed", 00:21:07.594 "digest": "sha512", 00:21:07.594 "dhgroup": "ffdhe6144" 00:21:07.594 } 00:21:07.594 } 00:21:07.594 ]' 00:21:07.594 01:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:07.594 01:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:07.594 01:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:07.594 01:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:07.594 01:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:07.594 01:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.594 01:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.594 01:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.852 01:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OTA3OGEyNWYzOGNjNTZmNmU4MjRmM2RiZTY1YjEyOWVUtDjs: 00:21:08.785 01:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.785 01:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:08.785 01:49:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:08.785 01:49:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.785 01:49:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:08.785 01:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:08.785 01:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:08.785 01:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:09.350 01:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 2 00:21:09.350 01:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:09.350 01:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:09.350 01:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:09.350 01:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:09.350 01:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:21:09.350 01:49:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:21:09.350 01:49:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.350 01:49:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:09.350 01:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:09.350 01:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:09.607 00:21:09.864 01:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:09.864 01:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:09.864 01:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.864 01:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.121 01:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.121 01:49:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:10.121 01:49:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.121 01:49:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:10.121 01:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:10.121 { 00:21:10.121 "cntlid": 133, 00:21:10.121 "qid": 0, 00:21:10.121 "state": "enabled", 00:21:10.121 "listen_address": { 00:21:10.121 "trtype": "TCP", 00:21:10.121 "adrfam": "IPv4", 00:21:10.121 "traddr": "10.0.0.2", 00:21:10.121 "trsvcid": "4420" 00:21:10.121 }, 00:21:10.121 "peer_address": { 00:21:10.121 "trtype": "TCP", 00:21:10.121 "adrfam": "IPv4", 00:21:10.122 "traddr": "10.0.0.1", 00:21:10.122 "trsvcid": "33106" 00:21:10.122 }, 00:21:10.122 "auth": { 00:21:10.122 "state": "completed", 00:21:10.122 "digest": "sha512", 00:21:10.122 "dhgroup": "ffdhe6144" 00:21:10.122 } 00:21:10.122 } 00:21:10.122 ]' 00:21:10.122 01:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:10.122 01:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:10.122 01:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:10.122 01:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:10.122 01:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:10.122 01:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.122 01:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.122 01:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.380 01:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:YTVlNjI2YzlhZmNjYTkwZGM1YjI5Y2ExZDYxODM3YWFhM2E2YjY4NTU3YjU1MzUyS2g3Pw==: 00:21:11.313 01:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.313 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.313 01:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:11.313 01:49:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:11.313 01:49:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.313 01:49:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:11.313 01:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:11.313 01:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:11.313 01:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:11.571 01:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 3 00:21:11.571 01:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:11.571 01:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:11.571 01:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:11.571 01:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:11.571 01:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:21:11.571 01:49:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:11.571 01:49:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.571 01:49:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:11.571 01:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:11.571 01:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:12.173 00:21:12.173 01:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:12.173 01:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:12.173 01:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.431 01:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.431 01:49:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.431 01:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:12.431 01:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.431 01:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:12.431 01:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:12.431 { 00:21:12.431 "cntlid": 135, 00:21:12.431 "qid": 0, 00:21:12.431 "state": "enabled", 00:21:12.431 "listen_address": { 00:21:12.431 "trtype": "TCP", 00:21:12.431 "adrfam": "IPv4", 00:21:12.431 "traddr": "10.0.0.2", 00:21:12.431 "trsvcid": "4420" 00:21:12.431 }, 00:21:12.431 "peer_address": { 00:21:12.431 "trtype": "TCP", 00:21:12.431 "adrfam": "IPv4", 00:21:12.431 "traddr": "10.0.0.1", 00:21:12.431 "trsvcid": "33124" 00:21:12.431 }, 00:21:12.431 "auth": { 00:21:12.431 "state": "completed", 00:21:12.431 "digest": "sha512", 00:21:12.431 "dhgroup": "ffdhe6144" 00:21:12.431 } 00:21:12.431 } 00:21:12.431 ]' 00:21:12.431 01:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:12.431 01:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:12.431 01:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:12.431 01:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:12.431 01:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:12.431 01:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.431 01:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.431 01:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.688 01:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDZhMmUxOGNmZjA4YTJjYTliNGYyOWIwMTlkMTIxZTY5NTMzZTdmODliNmE4YTI3NzA1N2Q5YjZjNzFlNzdlNnIXNiA=: 00:21:13.617 01:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.617 01:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:13.617 01:49:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:13.617 01:49:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.617 01:49:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:13.617 01:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:13.617 01:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:13.617 01:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:13.617 01:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:13.875 01:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 0 00:21:13.875 01:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:13.875 01:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:13.875 01:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:13.875 01:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:13.875 01:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:21:13.875 01:49:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:13.875 01:49:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.875 01:49:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:13.875 01:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:13.875 01:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:14.809 00:21:14.809 01:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:14.809 01:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:14.809 01:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.067 01:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.067 01:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.067 01:49:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:15.067 01:49:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.067 01:49:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:15.067 01:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:15.067 { 00:21:15.067 "cntlid": 137, 00:21:15.067 "qid": 0, 00:21:15.067 "state": "enabled", 00:21:15.067 "listen_address": { 00:21:15.067 "trtype": "TCP", 00:21:15.067 "adrfam": "IPv4", 00:21:15.067 "traddr": "10.0.0.2", 00:21:15.067 "trsvcid": "4420" 00:21:15.067 }, 00:21:15.067 "peer_address": { 00:21:15.067 "trtype": "TCP", 00:21:15.067 "adrfam": "IPv4", 00:21:15.067 "traddr": "10.0.0.1", 00:21:15.067 "trsvcid": "33166" 00:21:15.067 }, 00:21:15.067 "auth": { 00:21:15.067 "state": "completed", 00:21:15.067 "digest": "sha512", 00:21:15.067 "dhgroup": "ffdhe8192" 00:21:15.067 } 00:21:15.067 } 00:21:15.067 ]' 00:21:15.067 01:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:15.067 01:49:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:15.067 01:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:15.067 01:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:15.067 01:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:15.067 01:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.067 01:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.067 01:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.325 01:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZjAwYmIyOTNiNjRjOTU3NWY0YTk3M2RiMjAxNjViNTYwZjI1ZjQzN2RiY2MzMjkyW5t2hg==: 00:21:16.257 01:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.257 01:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:16.257 01:49:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:16.257 01:49:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.257 01:49:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:16.257 01:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:16.257 01:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:16.257 01:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:16.515 01:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 1 00:21:16.515 01:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:16.515 01:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:16.515 01:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:16.515 01:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:16.516 01:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:21:16.516 01:49:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:16.516 01:49:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.516 01:49:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:16.516 01:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:16.516 01:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:17.449 00:21:17.449 01:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:17.449 01:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:17.449 01:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.707 01:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.707 01:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.707 01:49:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:17.707 01:49:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.707 01:49:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:17.707 01:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:17.707 { 00:21:17.707 "cntlid": 139, 00:21:17.707 "qid": 0, 00:21:17.707 "state": "enabled", 00:21:17.707 "listen_address": { 00:21:17.707 "trtype": "TCP", 00:21:17.707 "adrfam": "IPv4", 00:21:17.707 "traddr": "10.0.0.2", 00:21:17.707 "trsvcid": "4420" 00:21:17.707 }, 00:21:17.707 "peer_address": { 00:21:17.707 "trtype": "TCP", 00:21:17.707 "adrfam": "IPv4", 00:21:17.707 "traddr": "10.0.0.1", 00:21:17.707 "trsvcid": "33186" 00:21:17.707 }, 00:21:17.707 "auth": { 00:21:17.707 "state": "completed", 00:21:17.707 "digest": "sha512", 00:21:17.707 "dhgroup": "ffdhe8192" 00:21:17.707 } 00:21:17.707 } 00:21:17.707 ]' 00:21:17.707 01:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:17.965 01:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:17.965 01:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:17.965 01:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:17.965 01:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:17.965 01:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.965 01:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.965 01:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.222 01:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:OTA3OGEyNWYzOGNjNTZmNmU4MjRmM2RiZTY1YjEyOWVUtDjs: 00:21:19.155 01:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:21:19.155 01:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:19.155 01:49:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:19.155 01:49:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.155 01:49:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:19.155 01:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:19.155 01:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:19.156 01:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:19.413 01:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 2 00:21:19.414 01:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:19.414 01:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:19.414 01:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:19.414 01:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:19.414 01:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 00:21:19.414 01:49:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:19.414 01:49:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.414 01:49:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:19.414 01:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:19.414 01:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:20.347 00:21:20.347 01:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:20.347 01:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.347 01:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:20.604 01:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.604 01:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.604 01:49:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:20.604 01:49:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.604 01:49:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
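The qpairs dump that follows is the payload each connect_authenticate() pass parses. As a minimal standalone sketch of that verification step (following the exact rpc.py path, the host app's /var/tmp/host.sock socket, and the target app's default RPC socket as used in this trace; expected literals shown for the sha512/ffdhe8192 iteration), the check reduces to:

    #!/usr/bin/env bash
    # Hedged sketch of the per-iteration verification in target/auth.sh:
    # confirm the attached controller exists, then confirm the target reports
    # the expected negotiated digest/dhgroup and a completed auth state.
    set -euo pipefail
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Host-side bdev_nvme query goes to the host app's socket ("hostrpc" in the trace).
    name=$("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]

    # Subsystem qpairs are queried on the target app's default socket ("rpc_cmd").
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed ]]

The same three jq probes repeat for every digest/dhgroup/key combination the surrounding loops iterate over; only the expected literals change.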
00:21:20.604 01:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:20.604 { 00:21:20.604 "cntlid": 141, 00:21:20.604 "qid": 0, 00:21:20.604 "state": "enabled", 00:21:20.604 "listen_address": { 00:21:20.604 "trtype": "TCP", 00:21:20.604 "adrfam": "IPv4", 00:21:20.604 "traddr": "10.0.0.2", 00:21:20.604 "trsvcid": "4420" 00:21:20.604 }, 00:21:20.604 "peer_address": { 00:21:20.604 "trtype": "TCP", 00:21:20.604 "adrfam": "IPv4", 00:21:20.604 "traddr": "10.0.0.1", 00:21:20.604 "trsvcid": "52936" 00:21:20.604 }, 00:21:20.604 "auth": { 00:21:20.604 "state": "completed", 00:21:20.604 "digest": "sha512", 00:21:20.604 "dhgroup": "ffdhe8192" 00:21:20.604 } 00:21:20.604 } 00:21:20.604 ]' 00:21:20.604 01:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:20.605 01:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:20.605 01:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:20.605 01:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:20.605 01:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:20.862 01:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.862 01:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.862 01:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.119 01:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:YTVlNjI2YzlhZmNjYTkwZGM1YjI5Y2ExZDYxODM3YWFhM2E2YjY4NTU3YjU1MzUyS2g3Pw==: 00:21:22.052 01:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.052 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.052 01:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:22.052 01:49:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:22.052 01:49:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.052 01:49:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:22.052 01:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:22.052 01:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:22.052 01:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:22.052 01:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 3 00:21:22.052 01:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:22.052 01:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:22.052 01:49:45 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:22.052 01:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:22.052 01:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:21:22.052 01:49:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:22.052 01:49:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.052 01:49:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:22.052 01:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:22.052 01:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:22.985 00:21:22.985 01:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:22.985 01:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:22.985 01:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.242 01:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.242 01:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.242 01:49:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:23.243 01:49:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.243 01:49:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:23.243 01:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:23.243 { 00:21:23.243 "cntlid": 143, 00:21:23.243 "qid": 0, 00:21:23.243 "state": "enabled", 00:21:23.243 "listen_address": { 00:21:23.243 "trtype": "TCP", 00:21:23.243 "adrfam": "IPv4", 00:21:23.243 "traddr": "10.0.0.2", 00:21:23.243 "trsvcid": "4420" 00:21:23.243 }, 00:21:23.243 "peer_address": { 00:21:23.243 "trtype": "TCP", 00:21:23.243 "adrfam": "IPv4", 00:21:23.243 "traddr": "10.0.0.1", 00:21:23.243 "trsvcid": "52962" 00:21:23.243 }, 00:21:23.243 "auth": { 00:21:23.243 "state": "completed", 00:21:23.243 "digest": "sha512", 00:21:23.243 "dhgroup": "ffdhe8192" 00:21:23.243 } 00:21:23.243 } 00:21:23.243 ]' 00:21:23.243 01:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:23.243 01:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:23.243 01:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:23.243 01:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:23.243 01:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:23.500 01:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.500 01:49:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.500 01:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.758 01:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZDZhMmUxOGNmZjA4YTJjYTliNGYyOWIwMTlkMTIxZTY5NTMzZTdmODliNmE4YTI3NzA1N2Q5YjZjNzFlNzdlNnIXNiA=: 00:21:24.690 01:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.690 01:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:24.690 01:49:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:24.690 01:49:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.690 01:49:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:24.690 01:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:21:24.690 01:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s sha256,sha384,sha512 00:21:24.690 01:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:21:24.690 01:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:24.690 01:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:24.690 01:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:24.948 01:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@107 -- # connect_authenticate sha512 ffdhe8192 0 00:21:24.948 01:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:24.948 01:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:24.948 01:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:24.948 01:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:24.948 01:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 00:21:24.948 01:49:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:24.948 01:49:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.948 01:49:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:24.948 01:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:24.948 
01:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:25.881 00:21:25.881 01:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:25.881 01:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:25.881 01:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.138 01:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.138 01:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.138 01:49:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:26.138 01:49:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.138 01:49:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:26.138 01:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:26.138 { 00:21:26.138 "cntlid": 145, 00:21:26.138 "qid": 0, 00:21:26.138 "state": "enabled", 00:21:26.138 "listen_address": { 00:21:26.138 "trtype": "TCP", 00:21:26.138 "adrfam": "IPv4", 00:21:26.138 "traddr": "10.0.0.2", 00:21:26.138 "trsvcid": "4420" 00:21:26.138 }, 00:21:26.138 "peer_address": { 00:21:26.138 "trtype": "TCP", 00:21:26.138 "adrfam": "IPv4", 00:21:26.138 "traddr": "10.0.0.1", 00:21:26.138 "trsvcid": "52976" 00:21:26.138 }, 00:21:26.138 "auth": { 00:21:26.138 "state": "completed", 00:21:26.138 "digest": "sha512", 00:21:26.138 "dhgroup": "ffdhe8192" 00:21:26.138 } 00:21:26.138 } 00:21:26.138 ]' 00:21:26.138 01:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:26.138 01:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:26.138 01:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:26.138 01:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:26.138 01:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:26.138 01:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.138 01:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.138 01:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.396 01:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:ZjAwYmIyOTNiNjRjOTU3NWY0YTk3M2RiMjAxNjViNTYwZjI1ZjQzN2RiY2MzMjkyW5t2hg==: 00:21:27.329 01:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.329 01:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:27.329 01:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:27.329 01:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.329 01:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:27.329 01:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@110 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:21:27.329 01:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:27.329 01:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.329 01:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:27.329 01:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@111 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:27.329 01:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:21:27.329 01:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:27.329 01:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:21:27.329 01:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:27.329 01:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:21:27.329 01:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:27.329 01:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:27.329 01:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:28.263 request: 00:21:28.263 { 00:21:28.263 "name": "nvme0", 00:21:28.263 "trtype": "tcp", 00:21:28.263 "traddr": "10.0.0.2", 00:21:28.263 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:28.263 "adrfam": "ipv4", 00:21:28.263 "trsvcid": "4420", 00:21:28.263 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:28.263 "dhchap_key": "key2", 00:21:28.263 "method": "bdev_nvme_attach_controller", 00:21:28.263 "req_id": 1 00:21:28.263 } 00:21:28.263 Got JSON-RPC error response 00:21:28.263 response: 00:21:28.263 { 00:21:28.263 "code": -32602, 00:21:28.263 "message": "Invalid parameters" 00:21:28.263 } 00:21:28.263 01:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:21:28.263 01:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:28.263 01:49:52 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:28.263 01:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:28.263 01:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:28.263 01:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:28.263 01:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.263 01:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:28.263 01:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@116 -- # trap - SIGINT SIGTERM EXIT 00:21:28.263 01:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # cleanup 00:21:28.263 01:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 4060632 00:21:28.263 01:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@947 -- # '[' -z 4060632 ']' 00:21:28.263 01:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # kill -0 4060632 00:21:28.263 01:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # uname 00:21:28.263 01:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:28.263 01:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4060632 00:21:28.263 01:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:21:28.263 01:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:21:28.263 01:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4060632' 00:21:28.263 killing process with pid 4060632 00:21:28.263 01:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # kill 4060632 00:21:28.263 01:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@971 -- # wait 4060632 00:21:28.858 01:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:28.858 01:49:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:28.858 01:49:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:21:28.858 01:49:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:28.858 01:49:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:21:28.858 01:49:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:28.858 01:49:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:28.858 rmmod nvme_tcp 00:21:28.858 rmmod nvme_fabrics 00:21:28.858 rmmod nvme_keyring 00:21:28.858 01:49:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:28.858 01:49:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:21:28.858 01:49:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:21:28.858 01:49:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 4060607 ']' 00:21:28.858 01:49:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 4060607 00:21:28.858 01:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@947 -- # '[' -z 4060607 ']' 00:21:28.858 01:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # kill -0 4060607 00:21:28.858 01:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # uname 00:21:28.858 
01:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:28.858 01:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4060607 00:21:28.858 01:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:21:28.858 01:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:21:28.858 01:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4060607' 00:21:28.858 killing process with pid 4060607 00:21:28.858 01:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # kill 4060607 00:21:28.858 01:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@971 -- # wait 4060607 00:21:29.117 01:49:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:29.117 01:49:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:29.117 01:49:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:29.117 01:49:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:29.117 01:49:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:29.117 01:49:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:29.117 01:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:29.117 01:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.015 01:49:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:31.015 01:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.81K /tmp/spdk.key-sha256.UwB /tmp/spdk.key-sha384.dND /tmp/spdk.key-sha512.uXN /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:21:31.015 00:21:31.015 real 2m57.539s 00:21:31.015 user 6m52.370s 00:21:31.015 sys 0m21.139s 00:21:31.015 01:49:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:31.015 01:49:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.015 ************************************ 00:21:31.015 END TEST nvmf_auth_target 00:21:31.015 ************************************ 00:21:31.015 01:49:54 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:21:31.015 01:49:54 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:31.015 01:49:54 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:21:31.015 01:49:54 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:31.015 01:49:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:31.015 ************************************ 00:21:31.015 START TEST nvmf_bdevio_no_huge 00:21:31.015 ************************************ 00:21:31.015 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:31.015 * Looking for test storage... 
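Before the cleanup above, the run also exercised the rejection path: the host was re-registered with key1 only, so the NOT wrapper asserts that an attach presenting key2 is refused, which is the JSON-RPC -32602 "Invalid parameters" response in the trace. A hedged sketch of that expectation, reusing the same flags the trace passes to rpc.py:

    # Sketch: an attach using a DH-HMAC-CHAP key the subsystem does not
    # allow for this host must fail; success here is the test failure.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    if "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a 10.0.0.2 -s 4420 \
          -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
          -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2; then
        echo 'attach with unregistered key2 unexpectedly succeeded' >&2
        exit 1
    fi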
00:21:31.015 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:31.015 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:31.015 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:31.015 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:31.015 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:31.015 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:31.015 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:31.015 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:31.015 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:31.015 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:31.015 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:31.015 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:31.015 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:31.015 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:31.015 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:31.015 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:31.015 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:31.015 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:31.015 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:31.015 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:31.015 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:31.015 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:31.015 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:31.015 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.015 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.016 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.016 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:31.016 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.016 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:21:31.016 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:31.016 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:31.016 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:31.016 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:31.016 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:31.016 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:31.016 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:31.016 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:31.274 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:31.274 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:31.274 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:31.274 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:31.274 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:31.274 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:31.274 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:31.274 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:31.274 01:49:54 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.274 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:31.274 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.274 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:31.274 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:31.274 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:21:31.274 01:49:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:33.802 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:33.802 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:21:33.802 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:33.802 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:33.802 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:33.802 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:33.802 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:33.802 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:21:33.802 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:33.802 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:21:33.802 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:21:33.802 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:21:33.802 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:21:33.802 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:21:33.802 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:21:33.802 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:33.802 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:33.802 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:33.802 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:33.802 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:33.802 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:33.802 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:33.802 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:33.802 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:33.802 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:33.802 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:33.802 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:21:33.802 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:33.802 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:33.802 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:33.802 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:33.802 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:33.802 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:33.802 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:33.802 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:33.802 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:33.802 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:33.802 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:33.802 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:33.802 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:33.802 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:33.802 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:33.802 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:33.802 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:33.802 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:33.802 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:33.802 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:33.802 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:33.803 Found net devices under 0000:09:00.0: cvl_0_0 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:33.803 01:49:57 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:33.803 Found net devices under 0000:09:00.1: cvl_0_1 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:33.803 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:33.803 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:21:33.803 00:21:33.803 --- 10.0.0.2 ping statistics --- 00:21:33.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.803 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:33.803 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:33.803 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:21:33.803 00:21:33.803 --- 10.0.0.1 ping statistics --- 00:21:33.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.803 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@721 -- # xtrace_disable 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=4084530 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 4084530 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@828 -- # '[' -z 4084530 ']' 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:33.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
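The nvmf_common lines above are the whole network fixture for these on-hardware TCP runs: one ice port (cvl_0_0) is moved into a private namespace and becomes the NVMe/TCP target side at 10.0.0.2, while its peer port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, with an iptables rule opening the standard NVMe/TCP port 4420 and one ping in each direction to prove the path. A minimal standalone sketch of the same rig, assuming two back-to-back ports named cvl_0_0 and cvl_0_1 as on this test bed:

    # Two-port NVMe/TCP test rig (as built by nvmf_tcp_init in the log above).
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                 # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP listener port
    ping -c 1 10.0.0.2                              # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1          # target -> initiator

With the rig in place, the target process is then launched under `ip netns exec cvl_0_0_ns_spdk`, which is why $NVMF_TARGET_NS_CMD is prepended to $NVMF_APP.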
00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:33.803 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:33.803 [2024-05-15 01:49:57.649109] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:21:33.803 [2024-05-15 01:49:57.649185] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:33.803 [2024-05-15 01:49:57.726734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:34.062 [2024-05-15 01:49:57.808440] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:34.062 [2024-05-15 01:49:57.808517] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:34.062 [2024-05-15 01:49:57.808531] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:34.062 [2024-05-15 01:49:57.808541] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:34.062 [2024-05-15 01:49:57.808551] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:34.062 [2024-05-15 01:49:57.808603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:34.062 [2024-05-15 01:49:57.808660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:21:34.062 [2024-05-15 01:49:57.808725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:21:34.062 [2024-05-15 01:49:57.808728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:34.062 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:34.062 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@861 -- # return 0 00:21:34.062 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:34.062 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@727 -- # xtrace_disable 00:21:34.062 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:34.062 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:34.062 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:34.062 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:34.062 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:34.062 [2024-05-15 01:49:57.927340] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:34.062 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:34.062 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:34.062 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:34.062 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:34.062 Malloc0 00:21:34.062 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:34.062 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:34.062 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:34.062 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:34.062 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:34.062 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:34.062 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:34.062 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:34.062 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:34.062 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:34.062 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:34.062 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:34.062 [2024-05-15 01:49:57.964535] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:34.062 [2024-05-15 01:49:57.964866] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:34.062 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:34.062 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:34.062 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:34.062 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:21:34.062 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:21:34.062 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:34.062 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:34.062 { 00:21:34.062 "params": { 00:21:34.062 "name": "Nvme$subsystem", 00:21:34.062 "trtype": "$TEST_TRANSPORT", 00:21:34.062 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.062 "adrfam": "ipv4", 00:21:34.062 "trsvcid": "$NVMF_PORT", 00:21:34.062 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.062 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.062 "hdgst": ${hdgst:-false}, 00:21:34.062 "ddgst": ${ddgst:-false} 00:21:34.062 }, 00:21:34.062 "method": "bdev_nvme_attach_controller" 00:21:34.062 } 00:21:34.062 EOF 00:21:34.062 )") 00:21:34.062 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:21:34.062 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
00:21:34.062 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:21:34.062 01:49:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:34.062 "params": { 00:21:34.062 "name": "Nvme1", 00:21:34.062 "trtype": "tcp", 00:21:34.062 "traddr": "10.0.0.2", 00:21:34.062 "adrfam": "ipv4", 00:21:34.062 "trsvcid": "4420", 00:21:34.062 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:34.062 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:34.062 "hdgst": false, 00:21:34.062 "ddgst": false 00:21:34.062 }, 00:21:34.062 "method": "bdev_nvme_attach_controller" 00:21:34.062 }' 00:21:34.320 [2024-05-15 01:49:58.008971] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:21:34.320 [2024-05-15 01:49:58.009055] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid4084561 ] 00:21:34.320 [2024-05-15 01:49:58.081014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:34.320 [2024-05-15 01:49:58.167364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:34.320 [2024-05-15 01:49:58.167411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:34.320 [2024-05-15 01:49:58.167415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:34.577 I/O targets: 00:21:34.577 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:34.577 00:21:34.577 00:21:34.577 CUnit - A unit testing framework for C - Version 2.1-3 00:21:34.577 http://cunit.sourceforge.net/ 00:21:34.577 00:21:34.577 00:21:34.577 Suite: bdevio tests on: Nvme1n1 00:21:34.835 Test: blockdev write read block ...passed 00:21:34.835 Test: blockdev write zeroes read block ...passed 00:21:34.835 Test: blockdev write zeroes read no split ...passed 00:21:34.835 Test: blockdev write zeroes read split ...passed 00:21:34.835 Test: blockdev write zeroes read split partial ...passed 00:21:34.835 Test: blockdev reset ...[2024-05-15 01:49:58.642324] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:34.835 [2024-05-15 01:49:58.642441] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeb160 (9): Bad file descriptor 00:21:34.835 [2024-05-15 01:49:58.700091] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:34.835 passed 00:21:34.835 Test: blockdev write read 8 blocks ...passed 00:21:34.835 Test: blockdev write read size > 128k ...passed 00:21:34.835 Test: blockdev write read invalid size ...passed 00:21:35.092 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:35.092 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:35.092 Test: blockdev write read max offset ...passed 00:21:35.092 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:35.092 Test: blockdev writev readv 8 blocks ...passed 00:21:35.092 Test: blockdev writev readv 30 x 1block ...passed 00:21:35.092 Test: blockdev writev readv block ...passed 00:21:35.092 Test: blockdev writev readv size > 128k ...passed 00:21:35.092 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:35.092 Test: blockdev comparev and writev ...[2024-05-15 01:49:58.955437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:35.092 [2024-05-15 01:49:58.955474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:35.092 [2024-05-15 01:49:58.955499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:35.092 [2024-05-15 01:49:58.955516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:35.092 [2024-05-15 01:49:58.955857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:35.092 [2024-05-15 01:49:58.955881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:35.092 [2024-05-15 01:49:58.955903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:35.092 [2024-05-15 01:49:58.955919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:35.093 [2024-05-15 01:49:58.956276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:35.093 [2024-05-15 01:49:58.956301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:35.093 [2024-05-15 01:49:58.956323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:35.093 [2024-05-15 01:49:58.956339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:35.093 [2024-05-15 01:49:58.956634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:35.093 [2024-05-15 01:49:58.956658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:35.093 [2024-05-15 01:49:58.956680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:35.093 [2024-05-15 01:49:58.956695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:35.093 passed 00:21:35.350 Test: blockdev nvme passthru rw ...passed 00:21:35.350 Test: blockdev nvme passthru vendor specific ...[2024-05-15 01:49:59.038474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:35.350 [2024-05-15 01:49:59.038501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:35.350 [2024-05-15 01:49:59.038648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:35.350 [2024-05-15 01:49:59.038670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:35.350 [2024-05-15 01:49:59.038817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:35.350 [2024-05-15 01:49:59.038839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:35.350 [2024-05-15 01:49:59.038989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:35.350 [2024-05-15 01:49:59.039011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:35.350 passed 00:21:35.350 Test: blockdev nvme admin passthru ...passed 00:21:35.350 Test: blockdev copy ...passed 00:21:35.351 00:21:35.351 Run Summary: Type Total Ran Passed Failed Inactive 00:21:35.351 suites 1 1 n/a 0 0 00:21:35.351 tests 23 23 23 0 0 00:21:35.351 asserts 152 152 152 0 n/a 00:21:35.351 00:21:35.351 Elapsed time = 1.224 seconds 00:21:35.609 01:49:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:35.609 01:49:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:35.609 01:49:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:35.609 01:49:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:35.609 01:49:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:35.609 01:49:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:35.609 01:49:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:35.609 01:49:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:21:35.609 01:49:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:35.609 01:49:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:21:35.609 01:49:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:35.609 01:49:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:35.609 rmmod nvme_tcp 00:21:35.609 rmmod nvme_fabrics 00:21:35.609 rmmod nvme_keyring 00:21:35.609 01:49:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:35.609 01:49:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:21:35.609 01:49:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:21:35.609 01:49:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 4084530 ']' 00:21:35.609 01:49:59 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 4084530 00:21:35.609 01:49:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@947 -- # '[' -z 4084530 ']' 00:21:35.609 01:49:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # kill -0 4084530 00:21:35.609 01:49:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # uname 00:21:35.609 01:49:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:35.609 01:49:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4084530 00:21:35.609 01:49:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # process_name=reactor_3 00:21:35.609 01:49:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' reactor_3 = sudo ']' 00:21:35.609 01:49:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4084530' 00:21:35.609 killing process with pid 4084530 00:21:35.609 01:49:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # kill 4084530 00:21:35.609 [2024-05-15 01:49:59.489716] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:35.609 01:49:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # wait 4084530 00:21:36.177 01:49:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:36.177 01:49:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:36.177 01:49:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:36.177 01:49:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:36.177 01:49:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:36.177 01:49:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:36.177 01:49:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:36.177 01:49:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:38.077 01:50:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:38.078 00:21:38.078 real 0m7.035s 00:21:38.078 user 0m11.261s 00:21:38.078 sys 0m2.862s 00:21:38.078 01:50:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:38.078 01:50:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:38.078 ************************************ 00:21:38.078 END TEST nvmf_bdevio_no_huge 00:21:38.078 ************************************ 00:21:38.078 01:50:01 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:38.078 01:50:01 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:21:38.078 01:50:01 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:38.078 01:50:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:38.078 ************************************ 00:21:38.078 START TEST nvmf_tls 00:21:38.078 ************************************ 00:21:38.078 01:50:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 
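Teardown in nvmftestfini, visible just above, is the mirror image of the setup: the kernel initiator modules are unloaded (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines), the target process is killed and reaped, and the test address is flushed. The namespace removal itself runs through xtrace_disable_per_cmd with output redirected to /dev/null, so it never appears in the log; the last line of the sketch below is therefore an assumption about what _remove_spdk_ns does, not something this log shows:

    # Teardown sketch mirroring nvmftestfini / nvmf_tcp_fini above.
    modprobe -v -r nvme-tcp                # rmmod nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics            # no-op if already removed as a dependency
    kill "$nvmfpid" && wait "$nvmfpid"     # stop and reap the nvmf_tgt process
    ip -4 addr flush cvl_0_1               # clear the initiator-side address
    ip netns delete cvl_0_0_ns_spdk        # assumed body of _remove_spdk_ns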
00:21:38.335 * Looking for test storage... 00:21:38.335 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
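Worth noting in the common.sh setup above: the host identity is not hard-coded. `nvme gen-hostnqn` mints a UUID-based NQN per run, and the host ID is the UUID portion of that NQN. A small sketch of the derivation (only the resulting values appear in the log; the parameter expansion used to slice out the UUID is an assumption):

    # Per-run host identity, as set up by nvmf/common.sh above.
    NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # assumed: strip everything through the last ':'
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")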
00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:21:38.336 01:50:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@291 -- # pci_devs=() 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:40.862 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:40.862 
01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:40.862 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:40.862 Found net devices under 0000:09:00.0: cvl_0_0 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:40.862 Found net devices under 0000:09:00.1: cvl_0_1 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:40.862 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:40.863 
01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:40.863 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:40.863 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:40.863 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:40.863 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:40.863 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:40.863 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:40.863 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:40.863 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:40.863 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:40.863 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:40.863 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:40.863 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:40.863 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:40.863 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:40.863 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:40.863 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:40.863 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:40.863 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:21:40.863 00:21:40.863 --- 10.0.0.2 ping statistics --- 00:21:40.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.863 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:21:40.863 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:40.863 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:40.863 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:21:40.863 00:21:40.863 --- 10.0.0.1 ping statistics --- 00:21:40.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.863 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:21:40.863 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:40.863 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:21:40.863 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:40.863 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:40.863 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:40.863 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:40.863 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:40.863 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:40.863 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:40.863 01:50:04 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:40.863 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:40.863 01:50:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:21:40.863 01:50:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:40.863 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4087043 00:21:40.863 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:40.863 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4087043 00:21:40.863 01:50:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 4087043 ']' 00:21:40.863 01:50:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.863 01:50:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:40.863 01:50:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:40.863 01:50:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:40.863 01:50:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:40.863 [2024-05-15 01:50:04.767043] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:21:40.863 [2024-05-15 01:50:04.767127] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:41.120 EAL: No free 2048 kB hugepages reported on node 1 00:21:41.120 [2024-05-15 01:50:04.841374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.120 [2024-05-15 01:50:04.929845] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:41.120 [2024-05-15 01:50:04.929896] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:41.120 [2024-05-15 01:50:04.929922] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:41.120 [2024-05-15 01:50:04.929936] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:41.120 [2024-05-15 01:50:04.929949] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:41.120 [2024-05-15 01:50:04.929999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:41.120 01:50:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:41.120 01:50:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:21:41.120 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:41.120 01:50:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:21:41.120 01:50:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:41.120 01:50:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:41.120 01:50:04 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:21:41.120 01:50:04 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:41.378 true 00:21:41.378 01:50:05 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:41.378 01:50:05 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:21:41.635 01:50:05 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:21:41.635 01:50:05 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:21:41.635 01:50:05 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:41.893 01:50:05 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:41.893 01:50:05 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:21:42.152 01:50:05 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:21:42.152 01:50:05 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:21:42.152 01:50:05 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:42.409 01:50:06 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:42.409 01:50:06 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:21:42.667 01:50:06 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:21:42.667 01:50:06 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:21:42.667 01:50:06 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:42.667 01:50:06 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:21:42.925 01:50:06 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:21:42.925 01:50:06 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:21:42.925 01:50:06 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:43.183 01:50:07 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:43.183 01:50:07 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:21:43.441 01:50:07 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:21:43.441 01:50:07 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:21:43.441 01:50:07 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:43.699 01:50:07 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:43.699 01:50:07 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:21:43.957 01:50:07 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:21:43.957 01:50:07 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:21:43.957 01:50:07 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:21:43.957 01:50:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:21:43.957 01:50:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:43.957 01:50:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:43.957 01:50:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:43.957 01:50:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:43.957 01:50:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:43.957 01:50:07 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:43.957 01:50:07 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:21:43.957 01:50:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:21:43.957 01:50:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:43.957 01:50:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:43.957 01:50:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:21:43.957 01:50:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:43.957 01:50:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:43.957 01:50:07 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:43.957 01:50:07 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:21:43.957 01:50:07 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.v6MwYIc7PY 00:21:43.957 01:50:07 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:21:43.957 01:50:07 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.4qRAmPpLqd 00:21:43.957 01:50:07 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:43.957 01:50:07 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:43.957 01:50:07 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.v6MwYIc7PY 00:21:43.957 01:50:07 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.4qRAmPpLqd 00:21:43.957 01:50:07 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:21:44.215 01:50:08 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:21:44.780 01:50:08 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.v6MwYIc7PY 00:21:44.780 01:50:08 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.v6MwYIc7PY 00:21:44.780 01:50:08 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:44.780 [2024-05-15 01:50:08.672476] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:44.780 01:50:08 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:45.038 01:50:08 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:45.297 [2024-05-15 01:50:09.153739] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:45.297 [2024-05-15 01:50:09.153858] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:45.297 [2024-05-15 01:50:09.154103] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:45.297 01:50:09 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:45.555 malloc0 00:21:45.555 01:50:09 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:45.812 01:50:09 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.v6MwYIc7PY 00:21:46.070 [2024-05-15 01:50:09.930406] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:46.070 01:50:09 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.v6MwYIc7PY 00:21:46.071 EAL: No free 2048 kB hugepages reported on node 1 00:21:58.324 Initializing NVMe Controllers 00:21:58.324 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:58.324 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:58.324 Initialization complete. Launching workers. 
00:21:58.324 ========================================================
00:21:58.324 Latency(us)
00:21:58.324 Device Information : IOPS MiB/s Average min max
00:21:58.324 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7749.88 30.27 8260.76 1207.49 9074.83
00:21:58.324 ========================================================
00:21:58.324 Total : 7749.88 30.27 8260.76 1207.49 9074.83
00:21:58.324
00:21:58.324 01:50:20 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.v6MwYIc7PY
00:21:58.324 01:50:20 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:21:58.324 01:50:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:21:58.324 01:50:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:21:58.324 01:50:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.v6MwYIc7PY'
00:21:58.324 01:50:20 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:21:58.324 01:50:20 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4088929
00:21:58.324 01:50:20 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:21:58.324 01:50:20 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4088929 /var/tmp/bdevperf.sock
00:21:58.324 01:50:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 4088929 ']'
00:21:58.324 01:50:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:58.324 01:50:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100
00:21:58.324 01:50:20 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:21:58.324 01:50:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:21:58.324 01:50:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable
00:21:58.324 01:50:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:21:58.324 [2024-05-15 01:50:20.100332] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization...
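The trace records earlier in this section walk target/tls.sh through SPDK's ssl socket implementation: pick it as the default, then round-trip tls_version (13, then 7) and the ktls flag through sock_impl_set_options / sock_impl_get_options, checking each readback with jq. A minimal sketch of that configuration loop, assuming a running SPDK target on the default RPC socket; RPC points at the rpc.py path used in this run:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # make ssl the default socket implementation for new connections
    $RPC sock_set_default_impl -i ssl
    # request TLS 1.3 for the ssl impl and verify the readback
    $RPC sock_impl_set_options -i ssl --tls-version 13
    ver=$($RPC sock_impl_get_options -i ssl | jq -r .tls_version)
    [[ $ver == 13 ]] || exit 1
    # kTLS offload is toggled and verified the same way
    $RPC sock_impl_set_options -i ssl --enable-ktls
    ktls=$($RPC sock_impl_get_options -i ssl | jq -r .enable_ktls)
    [[ $ktls == true ]] || exit 1
    $RPC sock_impl_set_options -i ssl --disable-ktls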
00:21:58.324 [2024-05-15 01:50:20.100424] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4088929 ] 00:21:58.324 EAL: No free 2048 kB hugepages reported on node 1 00:21:58.324 [2024-05-15 01:50:20.171231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.324 [2024-05-15 01:50:20.257836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:58.324 01:50:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:58.324 01:50:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:21:58.324 01:50:20 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.v6MwYIc7PY 00:21:58.324 [2024-05-15 01:50:20.587420] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:58.324 [2024-05-15 01:50:20.587564] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:58.324 TLSTESTn1 00:21:58.324 01:50:20 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:58.324 Running I/O for 10 seconds... 00:22:08.286 00:22:08.286 Latency(us) 00:22:08.286 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:08.286 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:08.286 Verification LBA range: start 0x0 length 0x2000 00:22:08.286 TLSTESTn1 : 10.03 2890.78 11.29 0.00 0.00 44203.73 7670.14 63302.92 00:22:08.286 =================================================================================================================== 00:22:08.286 Total : 2890.78 11.29 0.00 0.00 44203.73 7670.14 63302.92 00:22:08.286 0 00:22:08.286 01:50:30 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:08.286 01:50:30 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 4088929 00:22:08.286 01:50:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 4088929 ']' 00:22:08.287 01:50:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 4088929 00:22:08.287 01:50:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:08.287 01:50:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:08.287 01:50:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4088929 00:22:08.287 01:50:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:22:08.287 01:50:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:22:08.287 01:50:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4088929' 00:22:08.287 killing process with pid 4088929 00:22:08.287 01:50:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 4088929 00:22:08.287 Received shutdown signal, test time was about 10.000000 seconds 00:22:08.287 00:22:08.287 Latency(us) 00:22:08.287 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:22:08.287 =================================================================================================================== 00:22:08.287 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:08.287 [2024-05-15 01:50:30.882925] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:08.287 01:50:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 4088929 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4qRAmPpLqd 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4qRAmPpLqd 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4qRAmPpLqd 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.4qRAmPpLqd' 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4090129 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4090129 /var/tmp/bdevperf.sock 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 4090129 ']' 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:08.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:08.287 [2024-05-15 01:50:31.119323] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
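The key files exercised in these cases (/tmp/tmp.v6MwYIc7PY holds the PSK registered with the target, /tmp/tmp.4qRAmPpLqd the deliberately mismatched one) were generated earlier by format_interchange_psk, which wraps the configured key in the NVMe TLS PSK interchange form NVMeTLSkey-1:<hash>:<base64>: by appending a CRC32 of the key bytes before base64-encoding. A standalone sketch, under the assumption (consistent with the keys printed in this log) that the CRC trailer is packed little-endian:

    format_interchange_psk() {
    python3 -c '
    import base64, struct, sys, zlib
    key = sys.argv[1].encode()
    digest = int(sys.argv[2])
    crc = zlib.crc32(key) & 0xffffffff  # assumption: CRC32 trailer packed little-endian
    blob = key + struct.pack("<I", crc)
    print(f"NVMeTLSkey-1:{digest:02}:{base64.b64encode(blob).decode()}:")
    ' "$1" "$2"
    }
    psk=$(format_interchange_psk 00112233445566778899aabbccddeeff 1)
    key_path=$(mktemp)
    echo -n "$psk" > "$key_path"
    chmod 0600 "$key_path"    # owner-only permissions, as the suite enforces later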
00:22:08.287 [2024-05-15 01:50:31.119416] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4090129 ] 00:22:08.287 EAL: No free 2048 kB hugepages reported on node 1 00:22:08.287 [2024-05-15 01:50:31.186971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.287 [2024-05-15 01:50:31.268850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.4qRAmPpLqd 00:22:08.287 [2024-05-15 01:50:31.590719] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:08.287 [2024-05-15 01:50:31.590845] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:08.287 [2024-05-15 01:50:31.597309] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:08.287 [2024-05-15 01:50:31.597736] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa4d700 (107): Transport endpoint is not connected 00:22:08.287 [2024-05-15 01:50:31.598727] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa4d700 (9): Bad file descriptor 00:22:08.287 [2024-05-15 01:50:31.599727] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:08.287 [2024-05-15 01:50:31.599749] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:08.287 [2024-05-15 01:50:31.599767] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:08.287 request: 00:22:08.287 { 00:22:08.287 "name": "TLSTEST", 00:22:08.287 "trtype": "tcp", 00:22:08.287 "traddr": "10.0.0.2", 00:22:08.287 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:08.287 "adrfam": "ipv4", 00:22:08.287 "trsvcid": "4420", 00:22:08.287 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.287 "psk": "/tmp/tmp.4qRAmPpLqd", 00:22:08.287 "method": "bdev_nvme_attach_controller", 00:22:08.287 "req_id": 1 00:22:08.287 } 00:22:08.287 Got JSON-RPC error response 00:22:08.287 response: 00:22:08.287 { 00:22:08.287 "code": -32602, 00:22:08.287 "message": "Invalid parameters" 00:22:08.287 } 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4090129 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 4090129 ']' 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 4090129 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4090129 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4090129' 00:22:08.287 killing process with pid 4090129 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 4090129 00:22:08.287 Received shutdown signal, test time was about 10.000000 seconds 00:22:08.287 00:22:08.287 Latency(us) 00:22:08.287 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:08.287 =================================================================================================================== 00:22:08.287 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:08.287 [2024-05-15 01:50:31.651524] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 4090129 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.v6MwYIc7PY 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.v6MwYIc7PY 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 
-- # case "$(type -t "$arg")" in 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.v6MwYIc7PY 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.v6MwYIc7PY' 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4090261 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4090261 /var/tmp/bdevperf.sock 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 4090261 ']' 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:08.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:08.287 01:50:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:08.287 [2024-05-15 01:50:31.914163] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
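Behind the failures being provoked here, the target side was brought up once by setup_nvmf_tgt (traced near the start of this section) and stays up across all the cases. Condensed into a sketch, with RPC as in the earlier sketch: the -k flag on the listener is what demands a secure channel, and --psk binds host1's identity to the key file:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    KEY=/tmp/tmp.v6MwYIc7PY    # interchange-format PSK written above
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k requests a secure (TLS) channel on this listener
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # only hosts registered with a PSK can complete the TLS handshake
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"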
00:22:08.288 [2024-05-15 01:50:31.914260] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4090261 ] 00:22:08.288 EAL: No free 2048 kB hugepages reported on node 1 00:22:08.288 [2024-05-15 01:50:31.982836] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.288 [2024-05-15 01:50:32.063837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:08.288 01:50:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:08.288 01:50:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:08.288 01:50:32 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.v6MwYIc7PY 00:22:08.545 [2024-05-15 01:50:32.443526] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:08.545 [2024-05-15 01:50:32.443670] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:08.545 [2024-05-15 01:50:32.449432] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:08.545 [2024-05-15 01:50:32.449469] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:08.545 [2024-05-15 01:50:32.449523] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:08.545 [2024-05-15 01:50:32.450460] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa1700 (107): Transport endpoint is not connected 00:22:08.545 [2024-05-15 01:50:32.451453] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa1700 (9): Bad file descriptor 00:22:08.545 [2024-05-15 01:50:32.452451] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:08.545 [2024-05-15 01:50:32.452474] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:08.545 [2024-05-15 01:50:32.452492] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:08.545 request: 00:22:08.545 { 00:22:08.545 "name": "TLSTEST", 00:22:08.545 "trtype": "tcp", 00:22:08.545 "traddr": "10.0.0.2", 00:22:08.545 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:08.545 "adrfam": "ipv4", 00:22:08.545 "trsvcid": "4420", 00:22:08.545 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.545 "psk": "/tmp/tmp.v6MwYIc7PY", 00:22:08.545 "method": "bdev_nvme_attach_controller", 00:22:08.545 "req_id": 1 00:22:08.545 } 00:22:08.545 Got JSON-RPC error response 00:22:08.545 response: 00:22:08.545 { 00:22:08.545 "code": -32602, 00:22:08.545 "message": "Invalid parameters" 00:22:08.545 } 00:22:08.545 01:50:32 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4090261 00:22:08.545 01:50:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 4090261 ']' 00:22:08.545 01:50:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 4090261 00:22:08.545 01:50:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:08.545 01:50:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:08.545 01:50:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4090261 00:22:08.803 01:50:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:22:08.803 01:50:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:22:08.803 01:50:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4090261' 00:22:08.803 killing process with pid 4090261 00:22:08.803 01:50:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 4090261 00:22:08.803 Received shutdown signal, test time was about 10.000000 seconds 00:22:08.803 00:22:08.803 Latency(us) 00:22:08.803 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:08.803 =================================================================================================================== 00:22:08.803 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:08.803 [2024-05-15 01:50:32.500981] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:08.803 01:50:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 4090261 00:22:08.803 01:50:32 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:08.803 01:50:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:22:08.803 01:50:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:08.803 01:50:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:08.803 01:50:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:08.803 01:50:32 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.v6MwYIc7PY 00:22:08.803 01:50:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:22:08.803 01:50:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.v6MwYIc7PY 00:22:08.803 01:50:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:22:08.803 01:50:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:08.803 01:50:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:22:08.803 01:50:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 
-- # case "$(type -t "$arg")" in 00:22:08.803 01:50:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.v6MwYIc7PY 00:22:08.803 01:50:32 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:08.803 01:50:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:08.803 01:50:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:08.803 01:50:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.v6MwYIc7PY' 00:22:08.803 01:50:32 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:08.803 01:50:32 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4090402 00:22:08.803 01:50:32 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:08.803 01:50:32 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:08.803 01:50:32 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4090402 /var/tmp/bdevperf.sock 00:22:08.803 01:50:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 4090402 ']' 00:22:08.803 01:50:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:08.803 01:50:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:08.803 01:50:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:08.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:08.803 01:50:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:08.803 01:50:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:09.061 [2024-05-15 01:50:32.758885] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
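Each case, positive or negative, then runs the same initiator-side shape seen in the run_bdevperf traces: start bdevperf idle (-z) on a private RPC socket, attach a controller with bdev_nvme_attach_controller --psk, and drive verify I/O through bdevperf.py. A sketch, where SPDK_DIR stands in for the workspace path used throughout this log and the polling loop is a simplified stand-in for the waitforlisten helper:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/bdevperf.sock
    $SPDK_DIR/build/examples/bdevperf -m 0x4 -z -r $SOCK -q 128 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!
    # simplified waitforlisten: poll until the RPC socket answers
    until $SPDK_DIR/scripts/rpc.py -s $SOCK rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
    $SPDK_DIR/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.v6MwYIc7PY
    $SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -t 20 -s $SOCK perform_tests
    kill $bdevperf_pid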
00:22:09.061 [2024-05-15 01:50:32.758974] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4090402 ] 00:22:09.061 EAL: No free 2048 kB hugepages reported on node 1 00:22:09.061 [2024-05-15 01:50:32.826105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.061 [2024-05-15 01:50:32.908303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:09.319 01:50:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:09.319 01:50:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:09.319 01:50:33 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.v6MwYIc7PY 00:22:09.319 [2024-05-15 01:50:33.248173] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:09.319 [2024-05-15 01:50:33.248333] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:09.577 [2024-05-15 01:50:33.253654] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:09.577 [2024-05-15 01:50:33.253690] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:09.577 [2024-05-15 01:50:33.253730] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:09.577 [2024-05-15 01:50:33.254270] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd63700 (107): Transport endpoint is not connected 00:22:09.577 [2024-05-15 01:50:33.255259] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd63700 (9): Bad file descriptor 00:22:09.577 [2024-05-15 01:50:33.256258] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:09.577 [2024-05-15 01:50:33.256297] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:09.577 [2024-05-15 01:50:33.256317] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:22:09.577 request: 00:22:09.577 { 00:22:09.577 "name": "TLSTEST", 00:22:09.577 "trtype": "tcp", 00:22:09.577 "traddr": "10.0.0.2", 00:22:09.577 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:09.577 "adrfam": "ipv4", 00:22:09.577 "trsvcid": "4420", 00:22:09.577 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:09.577 "psk": "/tmp/tmp.v6MwYIc7PY", 00:22:09.577 "method": "bdev_nvme_attach_controller", 00:22:09.577 "req_id": 1 00:22:09.577 } 00:22:09.577 Got JSON-RPC error response 00:22:09.577 response: 00:22:09.577 { 00:22:09.577 "code": -32602, 00:22:09.577 "message": "Invalid parameters" 00:22:09.577 } 00:22:09.577 01:50:33 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4090402 00:22:09.577 01:50:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 4090402 ']' 00:22:09.577 01:50:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 4090402 00:22:09.577 01:50:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:09.577 01:50:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:09.577 01:50:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4090402 00:22:09.577 01:50:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:22:09.577 01:50:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:22:09.577 01:50:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4090402' 00:22:09.577 killing process with pid 4090402 00:22:09.577 01:50:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 4090402 00:22:09.577 Received shutdown signal, test time was about 10.000000 seconds 00:22:09.577 00:22:09.577 Latency(us) 00:22:09.577 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:09.577 =================================================================================================================== 00:22:09.577 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:09.577 [2024-05-15 01:50:33.308699] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:09.577 01:50:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 4090402 00:22:09.834 01:50:33 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:09.834 01:50:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:22:09.834 01:50:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:09.834 01:50:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:09.834 01:50:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:09.834 01:50:33 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:09.834 01:50:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:22:09.834 01:50:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:09.834 01:50:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:22:09.834 01:50:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:09.834 01:50:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:22:09.834 01:50:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 
00:22:09.834 01:50:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:09.834 01:50:33 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:09.834 01:50:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:09.834 01:50:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:09.834 01:50:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:09.834 01:50:33 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:09.834 01:50:33 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4090424 00:22:09.834 01:50:33 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:09.834 01:50:33 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:09.834 01:50:33 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4090424 /var/tmp/bdevperf.sock 00:22:09.834 01:50:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 4090424 ']' 00:22:09.834 01:50:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:09.834 01:50:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:09.834 01:50:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:09.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:09.834 01:50:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:09.834 01:50:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:09.834 [2024-05-15 01:50:33.574016] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
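The rejected attachments above (wrong key, wrong hostnqn, wrong subnqn) and the no-PSK case being set up here are all asserted through the autotest NOT wrapper, whose es=1 bookkeeping dominates the surrounding trace: the wrapped command must exit non-zero for the test to pass. Its core idea, as an illustrative simplification rather than the literal autotest_common.sh code:

    NOT() {
        # invert the wrapped command: succeed only if it fails
        if "$@"; then
            return 1
        fi
        return 0
    }
    # e.g. attaching with no PSK to a TLS-required listener must fail,
    # using the suite's run_bdevperf function
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''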
00:22:09.834 [2024-05-15 01:50:33.574100] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4090424 ] 00:22:09.834 EAL: No free 2048 kB hugepages reported on node 1 00:22:09.834 [2024-05-15 01:50:33.649358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.834 [2024-05-15 01:50:33.733127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:10.092 01:50:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:10.092 01:50:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:10.092 01:50:33 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:10.349 [2024-05-15 01:50:34.123786] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:10.350 [2024-05-15 01:50:34.125706] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1af5dd0 (9): Bad file descriptor 00:22:10.350 [2024-05-15 01:50:34.126702] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:10.350 [2024-05-15 01:50:34.126726] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:10.350 [2024-05-15 01:50:34.126744] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:10.350 request: 00:22:10.350 { 00:22:10.350 "name": "TLSTEST", 00:22:10.350 "trtype": "tcp", 00:22:10.350 "traddr": "10.0.0.2", 00:22:10.350 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:10.350 "adrfam": "ipv4", 00:22:10.350 "trsvcid": "4420", 00:22:10.350 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:10.350 "method": "bdev_nvme_attach_controller", 00:22:10.350 "req_id": 1 00:22:10.350 } 00:22:10.350 Got JSON-RPC error response 00:22:10.350 response: 00:22:10.350 { 00:22:10.350 "code": -32602, 00:22:10.350 "message": "Invalid parameters" 00:22:10.350 } 00:22:10.350 01:50:34 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4090424 00:22:10.350 01:50:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 4090424 ']' 00:22:10.350 01:50:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 4090424 00:22:10.350 01:50:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:10.350 01:50:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:10.350 01:50:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4090424 00:22:10.350 01:50:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:22:10.350 01:50:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:22:10.350 01:50:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4090424' 00:22:10.350 killing process with pid 4090424 00:22:10.350 01:50:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 4090424 00:22:10.350 Received shutdown signal, test time was about 10.000000 seconds 00:22:10.350 00:22:10.350 Latency(us) 00:22:10.350 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:10.350 =================================================================================================================== 00:22:10.350 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:10.350 01:50:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 4090424 00:22:10.608 01:50:34 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:10.608 01:50:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:22:10.608 01:50:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:10.608 01:50:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:10.608 01:50:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:10.608 01:50:34 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 4087043 00:22:10.608 01:50:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 4087043 ']' 00:22:10.608 01:50:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 4087043 00:22:10.608 01:50:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:10.608 01:50:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:10.608 01:50:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4087043 00:22:10.608 01:50:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:22:10.608 01:50:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:22:10.608 01:50:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4087043' 00:22:10.608 killing process with pid 4087043 00:22:10.608 01:50:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 4087043 
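Teardown between stages goes through killprocess, as in the trace that follows: confirm the pid is alive with kill -0, read the command name with ps so a sudo wrapper is never killed, then signal and reap. A reduced sketch of that flow, simplified from the traced helper:

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0    # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1            # never kill a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true
    }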
00:22:10.608 [2024-05-15 01:50:34.414915] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:10.608 [2024-05-15 01:50:34.414972] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:10.608 01:50:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 4087043 00:22:10.866 01:50:34 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:10.866 01:50:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:10.866 01:50:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:10.866 01:50:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:10.866 01:50:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:10.866 01:50:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:22:10.866 01:50:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:10.866 01:50:34 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:10.866 01:50:34 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:22:10.866 01:50:34 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.ZinUbPIZll 00:22:10.866 01:50:34 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:10.866 01:50:34 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.ZinUbPIZll 00:22:10.866 01:50:34 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:22:10.866 01:50:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:10.866 01:50:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:22:10.866 01:50:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:10.866 01:50:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4090574 00:22:10.866 01:50:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:10.866 01:50:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4090574 00:22:10.866 01:50:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 4090574 ']' 00:22:10.866 01:50:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:10.866 01:50:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:10.866 01:50:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:10.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:10.866 01:50:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:10.866 01:50:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:10.866 [2024-05-15 01:50:34.724955] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
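For the second half of the suite the script regenerates its key material at 48 bytes with digest 2, which the interchange format maps to the NVMeTLSkey-1:02: prefix seen in the key_long value above (01 and 02 correspond to the retained-hash choices for 32- and 48-byte keys). Reusing the format_interchange_psk sketch from earlier:

    # 48-byte key with digest 2 -> NVMeTLSkey-1:02:...
    key_long=$(format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2)
    key_long_path=$(mktemp)
    echo -n "$key_long" > "$key_long_path"
    chmod 0600 "$key_long_path"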
00:22:10.866 [2024-05-15 01:50:34.725041] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:10.866 EAL: No free 2048 kB hugepages reported on node 1 00:22:11.124 [2024-05-15 01:50:34.803624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.124 [2024-05-15 01:50:34.883398] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:11.124 [2024-05-15 01:50:34.883454] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:11.124 [2024-05-15 01:50:34.883477] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:11.124 [2024-05-15 01:50:34.883503] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:11.125 [2024-05-15 01:50:34.883514] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:11.125 [2024-05-15 01:50:34.883540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:11.125 01:50:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:11.125 01:50:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:11.125 01:50:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:11.125 01:50:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:11.125 01:50:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:11.125 01:50:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:11.125 01:50:35 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.ZinUbPIZll 00:22:11.125 01:50:35 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ZinUbPIZll 00:22:11.125 01:50:35 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:11.382 [2024-05-15 01:50:35.224961] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:11.382 01:50:35 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:11.640 01:50:35 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:11.898 [2024-05-15 01:50:35.718239] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:11.898 [2024-05-15 01:50:35.718329] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:11.898 [2024-05-15 01:50:35.718558] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:11.898 01:50:35 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:12.156 malloc0 00:22:12.156 01:50:36 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
00:22:12.414 01:50:36 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZinUbPIZll 00:22:12.672 [2024-05-15 01:50:36.472267] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:12.672 01:50:36 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZinUbPIZll 00:22:12.672 01:50:36 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:12.672 01:50:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:12.672 01:50:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:12.672 01:50:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ZinUbPIZll' 00:22:12.672 01:50:36 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:12.672 01:50:36 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4090855 00:22:12.672 01:50:36 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:12.672 01:50:36 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:12.672 01:50:36 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4090855 /var/tmp/bdevperf.sock 00:22:12.672 01:50:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 4090855 ']' 00:22:12.672 01:50:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:12.672 01:50:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:12.672 01:50:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:12.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:12.672 01:50:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:12.672 01:50:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:12.673 [2024-05-15 01:50:36.529514] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
00:22:12.673 [2024-05-15 01:50:36.529609] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4090855 ] 00:22:12.673 EAL: No free 2048 kB hugepages reported on node 1 00:22:12.673 [2024-05-15 01:50:36.598488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.930 [2024-05-15 01:50:36.682251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:12.931 01:50:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:12.931 01:50:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:12.931 01:50:36 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZinUbPIZll 00:22:13.189 [2024-05-15 01:50:37.005124] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:13.189 [2024-05-15 01:50:37.005284] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:13.189 TLSTESTn1 00:22:13.189 01:50:37 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:13.446 Running I/O for 10 seconds... 00:22:23.504 00:22:23.504 Latency(us) 00:22:23.504 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.504 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:23.504 Verification LBA range: start 0x0 length 0x2000 00:22:23.504 TLSTESTn1 : 10.02 2485.86 9.71 0.00 0.00 51391.12 9563.40 45826.65 00:22:23.504 =================================================================================================================== 00:22:23.504 Total : 2485.86 9.71 0.00 0.00 51391.12 9563.40 45826.65 00:22:23.504 0 00:22:23.504 01:50:47 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:23.504 01:50:47 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 4090855 00:22:23.504 01:50:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 4090855 ']' 00:22:23.504 01:50:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 4090855 00:22:23.504 01:50:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:23.504 01:50:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:23.504 01:50:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4090855 00:22:23.504 01:50:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:22:23.504 01:50:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:22:23.504 01:50:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4090855' 00:22:23.504 killing process with pid 4090855 00:22:23.504 01:50:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 4090855 00:22:23.504 Received shutdown signal, test time was about 10.000000 seconds 00:22:23.504 00:22:23.504 Latency(us) 00:22:23.504 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:22:23.504 =================================================================================================================== 00:22:23.504 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:23.504 [2024-05-15 01:50:47.282073] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:23.504 01:50:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 4090855 00:22:23.762 01:50:47 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.ZinUbPIZll 00:22:23.762 01:50:47 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZinUbPIZll 00:22:23.762 01:50:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:22:23.762 01:50:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZinUbPIZll 00:22:23.762 01:50:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:22:23.762 01:50:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:23.762 01:50:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:22:23.762 01:50:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:23.762 01:50:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZinUbPIZll 00:22:23.762 01:50:47 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:23.762 01:50:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:23.762 01:50:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:23.762 01:50:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ZinUbPIZll' 00:22:23.762 01:50:47 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:23.762 01:50:47 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:23.762 01:50:47 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4092171 00:22:23.762 01:50:47 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:23.762 01:50:47 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4092171 /var/tmp/bdevperf.sock 00:22:23.762 01:50:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 4092171 ']' 00:22:23.762 01:50:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:23.762 01:50:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:23.762 01:50:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:23.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:23.762 01:50:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:23.762 01:50:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:23.762 [2024-05-15 01:50:47.546799] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
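The closing case loosens the key file to 0666 and, as the records below show, bdev_nvme refuses it up front with 'Incorrect permissions for PSK file' and a -1 'Operation not permitted' JSON-RPC error, before any TCP connection is attempted. Reproduced with the NOT, SPDK_DIR, and SOCK helpers sketched earlier:

    chmod 0666 /tmp/tmp.ZinUbPIZll    # group/world access on a PSK file is rejected
    NOT $SPDK_DIR/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZinUbPIZll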
00:22:23.762 [2024-05-15 01:50:47.546888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4092171 ] 00:22:23.762 EAL: No free 2048 kB hugepages reported on node 1 00:22:23.762 [2024-05-15 01:50:47.614382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.021 [2024-05-15 01:50:47.696235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:24.021 01:50:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:24.021 01:50:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:24.021 01:50:47 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZinUbPIZll 00:22:24.278 [2024-05-15 01:50:48.067919] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:24.278 [2024-05-15 01:50:48.068005] bdev_nvme.c:6105:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:24.278 [2024-05-15 01:50:48.068019] bdev_nvme.c:6214:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.ZinUbPIZll 00:22:24.278 request: 00:22:24.278 { 00:22:24.278 "name": "TLSTEST", 00:22:24.278 "trtype": "tcp", 00:22:24.278 "traddr": "10.0.0.2", 00:22:24.278 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:24.278 "adrfam": "ipv4", 00:22:24.278 "trsvcid": "4420", 00:22:24.278 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:24.279 "psk": "/tmp/tmp.ZinUbPIZll", 00:22:24.279 "method": "bdev_nvme_attach_controller", 00:22:24.279 "req_id": 1 00:22:24.279 } 00:22:24.279 Got JSON-RPC error response 00:22:24.279 response: 00:22:24.279 { 00:22:24.279 "code": -1, 00:22:24.279 "message": "Operation not permitted" 00:22:24.279 } 00:22:24.279 01:50:48 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4092171 00:22:24.279 01:50:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 4092171 ']' 00:22:24.279 01:50:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 4092171 00:22:24.279 01:50:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:24.279 01:50:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:24.279 01:50:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4092171 00:22:24.279 01:50:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:22:24.279 01:50:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:22:24.279 01:50:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4092171' 00:22:24.279 killing process with pid 4092171 00:22:24.279 01:50:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 4092171 00:22:24.279 Received shutdown signal, test time was about 10.000000 seconds 00:22:24.279 00:22:24.279 Latency(us) 00:22:24.279 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:24.279 =================================================================================================================== 00:22:24.279 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:24.279 01:50:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 
-- # wait 4092171 00:22:24.536 01:50:48 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:24.536 01:50:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:22:24.536 01:50:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:24.536 01:50:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:24.536 01:50:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:24.536 01:50:48 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 4090574 00:22:24.536 01:50:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 4090574 ']' 00:22:24.536 01:50:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 4090574 00:22:24.536 01:50:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:24.536 01:50:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:24.536 01:50:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4090574 00:22:24.536 01:50:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:22:24.536 01:50:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:22:24.536 01:50:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4090574' 00:22:24.536 killing process with pid 4090574 00:22:24.536 01:50:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 4090574 00:22:24.536 [2024-05-15 01:50:48.357375] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:24.537 [2024-05-15 01:50:48.357438] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:24.537 01:50:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 4090574 00:22:24.795 01:50:48 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:22:24.795 01:50:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:24.795 01:50:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:22:24.795 01:50:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:24.795 01:50:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4092313 00:22:24.795 01:50:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:24.795 01:50:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4092313 00:22:24.795 01:50:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 4092313 ']' 00:22:24.795 01:50:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:24.795 01:50:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:24.795 01:50:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:24.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
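The return-code bookkeeping above (es=1, then the (( es > 128 )) and (( !es == 0 )) checks) is the tail end of the suite's NOT helper: the wrapped run_bdevperf is treated as a pass precisely because it exited non-zero. A standalone sketch of the idea, assuming nothing from the suite; NOT here is an illustrative reimplementation, not the exact autotest_common.sh code, which additionally screens out signal-driven exits above 128:

    NOT() {
        # Succeed only when the wrapped command fails.
        if "$@"; then
            return 1    # command unexpectedly succeeded
        fi
        return 0        # non-zero exit: the expected outcome
    }

    NOT false && echo 'failure observed, as required'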
00:22:24.795 01:50:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:24.795 01:50:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:24.795 [2024-05-15 01:50:48.648636] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:22:24.795 [2024-05-15 01:50:48.648733] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:24.795 EAL: No free 2048 kB hugepages reported on node 1 00:22:25.053 [2024-05-15 01:50:48.728293] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:25.053 [2024-05-15 01:50:48.813503] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:25.053 [2024-05-15 01:50:48.813567] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:25.053 [2024-05-15 01:50:48.813584] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:25.053 [2024-05-15 01:50:48.813598] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:25.053 [2024-05-15 01:50:48.813610] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:25.053 [2024-05-15 01:50:48.813648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:25.053 01:50:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:25.053 01:50:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:25.053 01:50:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:25.053 01:50:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:25.053 01:50:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:25.053 01:50:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:25.053 01:50:48 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.ZinUbPIZll 00:22:25.053 01:50:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:22:25.053 01:50:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.ZinUbPIZll 00:22:25.053 01:50:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=setup_nvmf_tgt 00:22:25.053 01:50:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:25.053 01:50:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t setup_nvmf_tgt 00:22:25.053 01:50:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:25.053 01:50:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # setup_nvmf_tgt /tmp/tmp.ZinUbPIZll 00:22:25.053 01:50:48 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ZinUbPIZll 00:22:25.053 01:50:48 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:25.312 [2024-05-15 01:50:49.166650] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:25.312 01:50:49 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:25.570 01:50:49 nvmf_tcp.nvmf_tls 
-- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:25.827 [2024-05-15 01:50:49.667956] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:25.827 [2024-05-15 01:50:49.668054] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:25.827 [2024-05-15 01:50:49.668320] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:25.827 01:50:49 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:26.085 malloc0 00:22:26.085 01:50:49 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:26.343 01:50:50 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZinUbPIZll 00:22:26.601 [2024-05-15 01:50:50.421715] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:26.601 [2024-05-15 01:50:50.421752] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:22:26.601 [2024-05-15 01:50:50.421794] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:22:26.601 request: 00:22:26.601 { 00:22:26.601 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:26.601 "host": "nqn.2016-06.io.spdk:host1", 00:22:26.601 "psk": "/tmp/tmp.ZinUbPIZll", 00:22:26.601 "method": "nvmf_subsystem_add_host", 00:22:26.601 "req_id": 1 00:22:26.601 } 00:22:26.601 Got JSON-RPC error response 00:22:26.601 response: 00:22:26.601 { 00:22:26.601 "code": -32603, 00:22:26.601 "message": "Internal error" 00:22:26.601 } 00:22:26.601 01:50:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:22:26.601 01:50:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:26.601 01:50:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:26.601 01:50:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:26.601 01:50:50 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 4092313 00:22:26.601 01:50:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 4092313 ']' 00:22:26.601 01:50:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 4092313 00:22:26.601 01:50:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:26.601 01:50:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:26.601 01:50:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4092313 00:22:26.601 01:50:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:22:26.601 01:50:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:22:26.601 01:50:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4092313' 00:22:26.601 killing process with pid 4092313 00:22:26.601 01:50:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 4092313 00:22:26.601 [2024-05-15 01:50:50.476430] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:26.601 01:50:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 4092313 00:22:26.859 01:50:50 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.ZinUbPIZll 00:22:26.859 01:50:50 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:22:26.859 01:50:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:26.859 01:50:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:22:26.859 01:50:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:26.859 01:50:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4092546 00:22:26.860 01:50:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:26.860 01:50:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4092546 00:22:26.860 01:50:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 4092546 ']' 00:22:26.860 01:50:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.860 01:50:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:26.860 01:50:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:26.860 01:50:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:26.860 01:50:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:26.860 [2024-05-15 01:50:50.763914] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:22:26.860 [2024-05-15 01:50:50.763990] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:27.118 EAL: No free 2048 kB hugepages reported on node 1 00:22:27.118 [2024-05-15 01:50:50.837538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.118 [2024-05-15 01:50:50.916991] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:27.118 [2024-05-15 01:50:50.917044] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:27.118 [2024-05-15 01:50:50.917058] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:27.118 [2024-05-15 01:50:50.917070] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:27.118 [2024-05-15 01:50:50.917079] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
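With the key back to 0600, the setup_nvmf_tgt sequence that just failed is repeated against the freshly started target and now completes. The sequence itself, condensed from the tls.sh@49-58 calls visible in this log (same rpc.py path, NQNs, and key file):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    KEY=/tmp/tmp.ZinUbPIZll
    NQN=nqn.2016-06.io.spdk:cnode1

    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem "$NQN" -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns "$NQN" malloc0 -n 1
    $RPC nvmf_subsystem_add_host "$NQN" nqn.2016-06.io.spdk:host1 --psk "$KEY"

Only the last call is permission-sensitive: with a 0666 key it returns the -32603 Internal error seen above, while with 0600 it merely emits the PSK-path deprecation warning.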
00:22:27.118 [2024-05-15 01:50:50.917105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:27.118 01:50:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:27.118 01:50:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:27.118 01:50:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:27.118 01:50:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:27.118 01:50:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:27.376 01:50:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:27.376 01:50:51 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.ZinUbPIZll 00:22:27.376 01:50:51 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ZinUbPIZll 00:22:27.376 01:50:51 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:27.376 [2024-05-15 01:50:51.266911] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:27.376 01:50:51 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:27.634 01:50:51 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:27.892 [2024-05-15 01:50:51.764196] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:27.892 [2024-05-15 01:50:51.764335] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:27.892 [2024-05-15 01:50:51.764596] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:27.892 01:50:51 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:28.150 malloc0 00:22:28.150 01:50:52 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:28.409 01:50:52 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZinUbPIZll 00:22:28.668 [2024-05-15 01:50:52.586697] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:28.926 01:50:52 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=4092775 00:22:28.926 01:50:52 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:28.926 01:50:52 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:28.926 01:50:52 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 4092775 /var/tmp/bdevperf.sock 00:22:28.926 01:50:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 4092775 ']' 00:22:28.926 01:50:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:22:28.926 01:50:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:28.926 01:50:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:28.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:28.926 01:50:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:28.926 01:50:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:28.926 [2024-05-15 01:50:52.647087] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:22:28.926 [2024-05-15 01:50:52.647163] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4092775 ] 00:22:28.926 EAL: No free 2048 kB hugepages reported on node 1 00:22:28.926 [2024-05-15 01:50:52.716819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.926 [2024-05-15 01:50:52.799633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:29.184 01:50:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:29.184 01:50:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:29.184 01:50:52 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZinUbPIZll 00:22:29.442 [2024-05-15 01:50:53.186597] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:29.442 [2024-05-15 01:50:53.186701] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:29.442 TLSTESTn1 00:22:29.442 01:50:53 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:30.010 01:50:53 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:22:30.010 "subsystems": [ 00:22:30.010 { 00:22:30.010 "subsystem": "keyring", 00:22:30.010 "config": [] 00:22:30.010 }, 00:22:30.010 { 00:22:30.010 "subsystem": "iobuf", 00:22:30.010 "config": [ 00:22:30.010 { 00:22:30.010 "method": "iobuf_set_options", 00:22:30.010 "params": { 00:22:30.010 "small_pool_count": 8192, 00:22:30.010 "large_pool_count": 1024, 00:22:30.010 "small_bufsize": 8192, 00:22:30.010 "large_bufsize": 135168 00:22:30.010 } 00:22:30.010 } 00:22:30.010 ] 00:22:30.010 }, 00:22:30.010 { 00:22:30.010 "subsystem": "sock", 00:22:30.010 "config": [ 00:22:30.010 { 00:22:30.010 "method": "sock_impl_set_options", 00:22:30.010 "params": { 00:22:30.010 "impl_name": "posix", 00:22:30.010 "recv_buf_size": 2097152, 00:22:30.010 "send_buf_size": 2097152, 00:22:30.010 "enable_recv_pipe": true, 00:22:30.010 "enable_quickack": false, 00:22:30.010 "enable_placement_id": 0, 00:22:30.010 "enable_zerocopy_send_server": true, 00:22:30.010 "enable_zerocopy_send_client": false, 00:22:30.010 "zerocopy_threshold": 0, 00:22:30.010 "tls_version": 0, 00:22:30.010 "enable_ktls": false 00:22:30.010 } 00:22:30.010 }, 00:22:30.010 { 00:22:30.010 "method": "sock_impl_set_options", 00:22:30.010 "params": { 00:22:30.010 
"impl_name": "ssl", 00:22:30.010 "recv_buf_size": 4096, 00:22:30.010 "send_buf_size": 4096, 00:22:30.010 "enable_recv_pipe": true, 00:22:30.010 "enable_quickack": false, 00:22:30.010 "enable_placement_id": 0, 00:22:30.010 "enable_zerocopy_send_server": true, 00:22:30.010 "enable_zerocopy_send_client": false, 00:22:30.010 "zerocopy_threshold": 0, 00:22:30.010 "tls_version": 0, 00:22:30.010 "enable_ktls": false 00:22:30.010 } 00:22:30.010 } 00:22:30.010 ] 00:22:30.010 }, 00:22:30.010 { 00:22:30.010 "subsystem": "vmd", 00:22:30.010 "config": [] 00:22:30.010 }, 00:22:30.010 { 00:22:30.010 "subsystem": "accel", 00:22:30.010 "config": [ 00:22:30.010 { 00:22:30.010 "method": "accel_set_options", 00:22:30.010 "params": { 00:22:30.010 "small_cache_size": 128, 00:22:30.010 "large_cache_size": 16, 00:22:30.010 "task_count": 2048, 00:22:30.010 "sequence_count": 2048, 00:22:30.010 "buf_count": 2048 00:22:30.010 } 00:22:30.010 } 00:22:30.010 ] 00:22:30.010 }, 00:22:30.010 { 00:22:30.010 "subsystem": "bdev", 00:22:30.010 "config": [ 00:22:30.010 { 00:22:30.010 "method": "bdev_set_options", 00:22:30.010 "params": { 00:22:30.010 "bdev_io_pool_size": 65535, 00:22:30.010 "bdev_io_cache_size": 256, 00:22:30.010 "bdev_auto_examine": true, 00:22:30.010 "iobuf_small_cache_size": 128, 00:22:30.010 "iobuf_large_cache_size": 16 00:22:30.010 } 00:22:30.010 }, 00:22:30.010 { 00:22:30.010 "method": "bdev_raid_set_options", 00:22:30.010 "params": { 00:22:30.010 "process_window_size_kb": 1024 00:22:30.010 } 00:22:30.010 }, 00:22:30.010 { 00:22:30.010 "method": "bdev_iscsi_set_options", 00:22:30.010 "params": { 00:22:30.010 "timeout_sec": 30 00:22:30.010 } 00:22:30.010 }, 00:22:30.010 { 00:22:30.010 "method": "bdev_nvme_set_options", 00:22:30.010 "params": { 00:22:30.010 "action_on_timeout": "none", 00:22:30.010 "timeout_us": 0, 00:22:30.010 "timeout_admin_us": 0, 00:22:30.010 "keep_alive_timeout_ms": 10000, 00:22:30.011 "arbitration_burst": 0, 00:22:30.011 "low_priority_weight": 0, 00:22:30.011 "medium_priority_weight": 0, 00:22:30.011 "high_priority_weight": 0, 00:22:30.011 "nvme_adminq_poll_period_us": 10000, 00:22:30.011 "nvme_ioq_poll_period_us": 0, 00:22:30.011 "io_queue_requests": 0, 00:22:30.011 "delay_cmd_submit": true, 00:22:30.011 "transport_retry_count": 4, 00:22:30.011 "bdev_retry_count": 3, 00:22:30.011 "transport_ack_timeout": 0, 00:22:30.011 "ctrlr_loss_timeout_sec": 0, 00:22:30.011 "reconnect_delay_sec": 0, 00:22:30.011 "fast_io_fail_timeout_sec": 0, 00:22:30.011 "disable_auto_failback": false, 00:22:30.011 "generate_uuids": false, 00:22:30.011 "transport_tos": 0, 00:22:30.011 "nvme_error_stat": false, 00:22:30.011 "rdma_srq_size": 0, 00:22:30.011 "io_path_stat": false, 00:22:30.011 "allow_accel_sequence": false, 00:22:30.011 "rdma_max_cq_size": 0, 00:22:30.011 "rdma_cm_event_timeout_ms": 0, 00:22:30.011 "dhchap_digests": [ 00:22:30.011 "sha256", 00:22:30.011 "sha384", 00:22:30.011 "sha512" 00:22:30.011 ], 00:22:30.011 "dhchap_dhgroups": [ 00:22:30.011 "null", 00:22:30.011 "ffdhe2048", 00:22:30.011 "ffdhe3072", 00:22:30.011 "ffdhe4096", 00:22:30.011 "ffdhe6144", 00:22:30.011 "ffdhe8192" 00:22:30.011 ] 00:22:30.011 } 00:22:30.011 }, 00:22:30.011 { 00:22:30.011 "method": "bdev_nvme_set_hotplug", 00:22:30.011 "params": { 00:22:30.011 "period_us": 100000, 00:22:30.011 "enable": false 00:22:30.011 } 00:22:30.011 }, 00:22:30.011 { 00:22:30.011 "method": "bdev_malloc_create", 00:22:30.011 "params": { 00:22:30.011 "name": "malloc0", 00:22:30.011 "num_blocks": 8192, 00:22:30.011 "block_size": 4096, 00:22:30.011 
"physical_block_size": 4096, 00:22:30.011 "uuid": "49c14244-e643-416f-8b30-e10db898eb3c", 00:22:30.011 "optimal_io_boundary": 0 00:22:30.011 } 00:22:30.011 }, 00:22:30.011 { 00:22:30.011 "method": "bdev_wait_for_examine" 00:22:30.011 } 00:22:30.011 ] 00:22:30.011 }, 00:22:30.011 { 00:22:30.011 "subsystem": "nbd", 00:22:30.011 "config": [] 00:22:30.011 }, 00:22:30.011 { 00:22:30.011 "subsystem": "scheduler", 00:22:30.011 "config": [ 00:22:30.011 { 00:22:30.011 "method": "framework_set_scheduler", 00:22:30.011 "params": { 00:22:30.011 "name": "static" 00:22:30.011 } 00:22:30.011 } 00:22:30.011 ] 00:22:30.011 }, 00:22:30.011 { 00:22:30.011 "subsystem": "nvmf", 00:22:30.011 "config": [ 00:22:30.011 { 00:22:30.011 "method": "nvmf_set_config", 00:22:30.011 "params": { 00:22:30.011 "discovery_filter": "match_any", 00:22:30.011 "admin_cmd_passthru": { 00:22:30.011 "identify_ctrlr": false 00:22:30.011 } 00:22:30.011 } 00:22:30.011 }, 00:22:30.011 { 00:22:30.011 "method": "nvmf_set_max_subsystems", 00:22:30.011 "params": { 00:22:30.011 "max_subsystems": 1024 00:22:30.011 } 00:22:30.011 }, 00:22:30.011 { 00:22:30.011 "method": "nvmf_set_crdt", 00:22:30.011 "params": { 00:22:30.011 "crdt1": 0, 00:22:30.011 "crdt2": 0, 00:22:30.011 "crdt3": 0 00:22:30.011 } 00:22:30.011 }, 00:22:30.011 { 00:22:30.011 "method": "nvmf_create_transport", 00:22:30.011 "params": { 00:22:30.011 "trtype": "TCP", 00:22:30.011 "max_queue_depth": 128, 00:22:30.011 "max_io_qpairs_per_ctrlr": 127, 00:22:30.011 "in_capsule_data_size": 4096, 00:22:30.011 "max_io_size": 131072, 00:22:30.011 "io_unit_size": 131072, 00:22:30.011 "max_aq_depth": 128, 00:22:30.011 "num_shared_buffers": 511, 00:22:30.011 "buf_cache_size": 4294967295, 00:22:30.011 "dif_insert_or_strip": false, 00:22:30.011 "zcopy": false, 00:22:30.011 "c2h_success": false, 00:22:30.011 "sock_priority": 0, 00:22:30.011 "abort_timeout_sec": 1, 00:22:30.011 "ack_timeout": 0, 00:22:30.011 "data_wr_pool_size": 0 00:22:30.011 } 00:22:30.011 }, 00:22:30.011 { 00:22:30.011 "method": "nvmf_create_subsystem", 00:22:30.011 "params": { 00:22:30.011 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:30.011 "allow_any_host": false, 00:22:30.011 "serial_number": "SPDK00000000000001", 00:22:30.011 "model_number": "SPDK bdev Controller", 00:22:30.011 "max_namespaces": 10, 00:22:30.011 "min_cntlid": 1, 00:22:30.011 "max_cntlid": 65519, 00:22:30.011 "ana_reporting": false 00:22:30.011 } 00:22:30.011 }, 00:22:30.011 { 00:22:30.011 "method": "nvmf_subsystem_add_host", 00:22:30.011 "params": { 00:22:30.011 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:30.011 "host": "nqn.2016-06.io.spdk:host1", 00:22:30.011 "psk": "/tmp/tmp.ZinUbPIZll" 00:22:30.011 } 00:22:30.011 }, 00:22:30.011 { 00:22:30.011 "method": "nvmf_subsystem_add_ns", 00:22:30.011 "params": { 00:22:30.011 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:30.011 "namespace": { 00:22:30.011 "nsid": 1, 00:22:30.011 "bdev_name": "malloc0", 00:22:30.011 "nguid": "49C14244E643416F8B30E10DB898EB3C", 00:22:30.011 "uuid": "49c14244-e643-416f-8b30-e10db898eb3c", 00:22:30.011 "no_auto_visible": false 00:22:30.011 } 00:22:30.011 } 00:22:30.011 }, 00:22:30.011 { 00:22:30.011 "method": "nvmf_subsystem_add_listener", 00:22:30.011 "params": { 00:22:30.011 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:30.011 "listen_address": { 00:22:30.011 "trtype": "TCP", 00:22:30.011 "adrfam": "IPv4", 00:22:30.011 "traddr": "10.0.0.2", 00:22:30.011 "trsvcid": "4420" 00:22:30.011 }, 00:22:30.011 "secure_channel": true 00:22:30.011 } 00:22:30.011 } 00:22:30.011 ] 00:22:30.011 } 
00:22:30.011 ] 00:22:30.011 }' 00:22:30.011 01:50:53 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:30.269 01:50:53 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:22:30.269 "subsystems": [ 00:22:30.269 { 00:22:30.269 "subsystem": "keyring", 00:22:30.269 "config": [] 00:22:30.269 }, 00:22:30.269 { 00:22:30.269 "subsystem": "iobuf", 00:22:30.269 "config": [ 00:22:30.269 { 00:22:30.269 "method": "iobuf_set_options", 00:22:30.269 "params": { 00:22:30.269 "small_pool_count": 8192, 00:22:30.269 "large_pool_count": 1024, 00:22:30.269 "small_bufsize": 8192, 00:22:30.269 "large_bufsize": 135168 00:22:30.269 } 00:22:30.270 } 00:22:30.270 ] 00:22:30.270 }, 00:22:30.270 { 00:22:30.270 "subsystem": "sock", 00:22:30.270 "config": [ 00:22:30.270 { 00:22:30.270 "method": "sock_impl_set_options", 00:22:30.270 "params": { 00:22:30.270 "impl_name": "posix", 00:22:30.270 "recv_buf_size": 2097152, 00:22:30.270 "send_buf_size": 2097152, 00:22:30.270 "enable_recv_pipe": true, 00:22:30.270 "enable_quickack": false, 00:22:30.270 "enable_placement_id": 0, 00:22:30.270 "enable_zerocopy_send_server": true, 00:22:30.270 "enable_zerocopy_send_client": false, 00:22:30.270 "zerocopy_threshold": 0, 00:22:30.270 "tls_version": 0, 00:22:30.270 "enable_ktls": false 00:22:30.270 } 00:22:30.270 }, 00:22:30.270 { 00:22:30.270 "method": "sock_impl_set_options", 00:22:30.270 "params": { 00:22:30.270 "impl_name": "ssl", 00:22:30.270 "recv_buf_size": 4096, 00:22:30.270 "send_buf_size": 4096, 00:22:30.270 "enable_recv_pipe": true, 00:22:30.270 "enable_quickack": false, 00:22:30.270 "enable_placement_id": 0, 00:22:30.270 "enable_zerocopy_send_server": true, 00:22:30.270 "enable_zerocopy_send_client": false, 00:22:30.270 "zerocopy_threshold": 0, 00:22:30.270 "tls_version": 0, 00:22:30.270 "enable_ktls": false 00:22:30.270 } 00:22:30.270 } 00:22:30.270 ] 00:22:30.270 }, 00:22:30.270 { 00:22:30.270 "subsystem": "vmd", 00:22:30.270 "config": [] 00:22:30.270 }, 00:22:30.270 { 00:22:30.270 "subsystem": "accel", 00:22:30.270 "config": [ 00:22:30.270 { 00:22:30.270 "method": "accel_set_options", 00:22:30.270 "params": { 00:22:30.270 "small_cache_size": 128, 00:22:30.270 "large_cache_size": 16, 00:22:30.270 "task_count": 2048, 00:22:30.270 "sequence_count": 2048, 00:22:30.270 "buf_count": 2048 00:22:30.270 } 00:22:30.270 } 00:22:30.270 ] 00:22:30.270 }, 00:22:30.270 { 00:22:30.270 "subsystem": "bdev", 00:22:30.270 "config": [ 00:22:30.270 { 00:22:30.270 "method": "bdev_set_options", 00:22:30.270 "params": { 00:22:30.270 "bdev_io_pool_size": 65535, 00:22:30.270 "bdev_io_cache_size": 256, 00:22:30.270 "bdev_auto_examine": true, 00:22:30.270 "iobuf_small_cache_size": 128, 00:22:30.270 "iobuf_large_cache_size": 16 00:22:30.270 } 00:22:30.270 }, 00:22:30.270 { 00:22:30.270 "method": "bdev_raid_set_options", 00:22:30.270 "params": { 00:22:30.270 "process_window_size_kb": 1024 00:22:30.270 } 00:22:30.270 }, 00:22:30.270 { 00:22:30.270 "method": "bdev_iscsi_set_options", 00:22:30.270 "params": { 00:22:30.270 "timeout_sec": 30 00:22:30.270 } 00:22:30.270 }, 00:22:30.270 { 00:22:30.270 "method": "bdev_nvme_set_options", 00:22:30.270 "params": { 00:22:30.270 "action_on_timeout": "none", 00:22:30.270 "timeout_us": 0, 00:22:30.270 "timeout_admin_us": 0, 00:22:30.270 "keep_alive_timeout_ms": 10000, 00:22:30.270 "arbitration_burst": 0, 00:22:30.270 "low_priority_weight": 0, 00:22:30.270 "medium_priority_weight": 0, 00:22:30.270 
"high_priority_weight": 0, 00:22:30.270 "nvme_adminq_poll_period_us": 10000, 00:22:30.270 "nvme_ioq_poll_period_us": 0, 00:22:30.270 "io_queue_requests": 512, 00:22:30.270 "delay_cmd_submit": true, 00:22:30.270 "transport_retry_count": 4, 00:22:30.270 "bdev_retry_count": 3, 00:22:30.270 "transport_ack_timeout": 0, 00:22:30.270 "ctrlr_loss_timeout_sec": 0, 00:22:30.270 "reconnect_delay_sec": 0, 00:22:30.270 "fast_io_fail_timeout_sec": 0, 00:22:30.270 "disable_auto_failback": false, 00:22:30.270 "generate_uuids": false, 00:22:30.270 "transport_tos": 0, 00:22:30.270 "nvme_error_stat": false, 00:22:30.270 "rdma_srq_size": 0, 00:22:30.270 "io_path_stat": false, 00:22:30.270 "allow_accel_sequence": false, 00:22:30.270 "rdma_max_cq_size": 0, 00:22:30.270 "rdma_cm_event_timeout_ms": 0, 00:22:30.270 "dhchap_digests": [ 00:22:30.270 "sha256", 00:22:30.270 "sha384", 00:22:30.270 "sha512" 00:22:30.270 ], 00:22:30.270 "dhchap_dhgroups": [ 00:22:30.270 "null", 00:22:30.270 "ffdhe2048", 00:22:30.270 "ffdhe3072", 00:22:30.270 "ffdhe4096", 00:22:30.270 "ffdhe6144", 00:22:30.270 "ffdhe8192" 00:22:30.270 ] 00:22:30.270 } 00:22:30.270 }, 00:22:30.270 { 00:22:30.270 "method": "bdev_nvme_attach_controller", 00:22:30.270 "params": { 00:22:30.270 "name": "TLSTEST", 00:22:30.270 "trtype": "TCP", 00:22:30.270 "adrfam": "IPv4", 00:22:30.270 "traddr": "10.0.0.2", 00:22:30.270 "trsvcid": "4420", 00:22:30.270 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:30.270 "prchk_reftag": false, 00:22:30.270 "prchk_guard": false, 00:22:30.270 "ctrlr_loss_timeout_sec": 0, 00:22:30.270 "reconnect_delay_sec": 0, 00:22:30.270 "fast_io_fail_timeout_sec": 0, 00:22:30.270 "psk": "/tmp/tmp.ZinUbPIZll", 00:22:30.270 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:30.270 "hdgst": false, 00:22:30.270 "ddgst": false 00:22:30.270 } 00:22:30.270 }, 00:22:30.270 { 00:22:30.270 "method": "bdev_nvme_set_hotplug", 00:22:30.270 "params": { 00:22:30.270 "period_us": 100000, 00:22:30.270 "enable": false 00:22:30.270 } 00:22:30.270 }, 00:22:30.270 { 00:22:30.270 "method": "bdev_wait_for_examine" 00:22:30.270 } 00:22:30.270 ] 00:22:30.270 }, 00:22:30.270 { 00:22:30.270 "subsystem": "nbd", 00:22:30.270 "config": [] 00:22:30.270 } 00:22:30.270 ] 00:22:30.270 }' 00:22:30.270 01:50:53 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 4092775 00:22:30.270 01:50:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 4092775 ']' 00:22:30.270 01:50:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 4092775 00:22:30.270 01:50:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:30.270 01:50:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:30.270 01:50:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4092775 00:22:30.270 01:50:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:22:30.270 01:50:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:22:30.270 01:50:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4092775' 00:22:30.270 killing process with pid 4092775 00:22:30.270 01:50:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 4092775 00:22:30.270 Received shutdown signal, test time was about 10.000000 seconds 00:22:30.270 00:22:30.270 Latency(us) 00:22:30.270 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:30.270 
=================================================================================================================== 00:22:30.270 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:30.270 [2024-05-15 01:50:53.998171] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:30.270 01:50:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 4092775 00:22:30.529 01:50:54 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 4092546 00:22:30.529 01:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 4092546 ']' 00:22:30.529 01:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 4092546 00:22:30.529 01:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:30.529 01:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:30.529 01:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4092546 00:22:30.529 01:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:22:30.529 01:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:22:30.529 01:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4092546' 00:22:30.529 killing process with pid 4092546 00:22:30.529 01:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 4092546 00:22:30.529 [2024-05-15 01:50:54.242475] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:30.529 [2024-05-15 01:50:54.242543] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:30.529 01:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 4092546 00:22:30.788 01:50:54 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:30.788 01:50:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:30.788 01:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:22:30.788 01:50:54 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:22:30.788 "subsystems": [ 00:22:30.788 { 00:22:30.788 "subsystem": "keyring", 00:22:30.788 "config": [] 00:22:30.788 }, 00:22:30.788 { 00:22:30.788 "subsystem": "iobuf", 00:22:30.788 "config": [ 00:22:30.788 { 00:22:30.788 "method": "iobuf_set_options", 00:22:30.788 "params": { 00:22:30.788 "small_pool_count": 8192, 00:22:30.788 "large_pool_count": 1024, 00:22:30.788 "small_bufsize": 8192, 00:22:30.788 "large_bufsize": 135168 00:22:30.788 } 00:22:30.788 } 00:22:30.788 ] 00:22:30.788 }, 00:22:30.788 { 00:22:30.788 "subsystem": "sock", 00:22:30.788 "config": [ 00:22:30.788 { 00:22:30.788 "method": "sock_impl_set_options", 00:22:30.788 "params": { 00:22:30.788 "impl_name": "posix", 00:22:30.788 "recv_buf_size": 2097152, 00:22:30.788 "send_buf_size": 2097152, 00:22:30.788 "enable_recv_pipe": true, 00:22:30.788 "enable_quickack": false, 00:22:30.788 "enable_placement_id": 0, 00:22:30.788 "enable_zerocopy_send_server": true, 00:22:30.788 "enable_zerocopy_send_client": false, 00:22:30.788 "zerocopy_threshold": 0, 00:22:30.788 "tls_version": 0, 00:22:30.788 "enable_ktls": false 00:22:30.788 } 00:22:30.788 }, 00:22:30.788 { 00:22:30.788 "method": "sock_impl_set_options", 00:22:30.788 
"params": { 00:22:30.788 "impl_name": "ssl", 00:22:30.788 "recv_buf_size": 4096, 00:22:30.788 "send_buf_size": 4096, 00:22:30.788 "enable_recv_pipe": true, 00:22:30.788 "enable_quickack": false, 00:22:30.788 "enable_placement_id": 0, 00:22:30.788 "enable_zerocopy_send_server": true, 00:22:30.788 "enable_zerocopy_send_client": false, 00:22:30.788 "zerocopy_threshold": 0, 00:22:30.788 "tls_version": 0, 00:22:30.788 "enable_ktls": false 00:22:30.788 } 00:22:30.788 } 00:22:30.788 ] 00:22:30.788 }, 00:22:30.788 { 00:22:30.788 "subsystem": "vmd", 00:22:30.788 "config": [] 00:22:30.788 }, 00:22:30.788 { 00:22:30.788 "subsystem": "accel", 00:22:30.788 "config": [ 00:22:30.788 { 00:22:30.788 "method": "accel_set_options", 00:22:30.788 "params": { 00:22:30.788 "small_cache_size": 128, 00:22:30.788 "large_cache_size": 16, 00:22:30.788 "task_count": 2048, 00:22:30.788 "sequence_count": 2048, 00:22:30.788 "buf_count": 2048 00:22:30.788 } 00:22:30.788 } 00:22:30.788 ] 00:22:30.788 }, 00:22:30.788 { 00:22:30.788 "subsystem": "bdev", 00:22:30.788 "config": [ 00:22:30.788 { 00:22:30.788 "method": "bdev_set_options", 00:22:30.788 "params": { 00:22:30.788 "bdev_io_pool_size": 65535, 00:22:30.788 "bdev_io_cache_size": 256, 00:22:30.788 "bdev_auto_examine": true, 00:22:30.788 "iobuf_small_cache_size": 128, 00:22:30.788 "iobuf_large_cache_size": 16 00:22:30.788 } 00:22:30.788 }, 00:22:30.788 { 00:22:30.788 "method": "bdev_raid_set_options", 00:22:30.788 "params": { 00:22:30.788 "process_window_size_kb": 1024 00:22:30.788 } 00:22:30.788 }, 00:22:30.788 { 00:22:30.788 "method": "bdev_iscsi_set_options", 00:22:30.788 "params": { 00:22:30.788 "timeout_sec": 30 00:22:30.788 } 00:22:30.788 }, 00:22:30.788 { 00:22:30.788 "method": "bdev_nvme_set_options", 00:22:30.788 "params": { 00:22:30.788 "action_on_timeout": "none", 00:22:30.788 "timeout_us": 0, 00:22:30.788 "timeout_admin_us": 0, 00:22:30.788 "keep_alive_timeout_ms": 10000, 00:22:30.788 "arbitration_burst": 0, 00:22:30.788 "low_priority_weight": 0, 00:22:30.788 "medium_priority_weight": 0, 00:22:30.788 "high_priority_weight": 0, 00:22:30.788 "nvme_adminq_poll_period_us": 10000, 00:22:30.788 "nvme_ioq_poll_period_us": 0, 00:22:30.788 "io_queue_requests": 0, 00:22:30.788 "delay_cmd_submit": true, 00:22:30.788 "transport_retry_count": 4, 00:22:30.788 "bdev_retry_count": 3, 00:22:30.788 "transport_ack_timeout": 0, 00:22:30.788 "ctrlr_loss_timeout_sec": 0, 00:22:30.788 "reconnect_delay_sec": 0, 00:22:30.788 "fast_io_fail_timeout_sec": 0, 00:22:30.788 "disable_auto_failback": false, 00:22:30.788 "generate_uuids": false, 00:22:30.788 "transport_tos": 0, 00:22:30.788 "nvme_error_stat": false, 00:22:30.788 "rdma_srq_size": 0, 00:22:30.788 "io_path_stat": false, 00:22:30.788 "allow_accel_sequence": false, 00:22:30.788 "rdma_max_cq_size": 0, 00:22:30.788 "rdma_cm_event_timeout_ms": 0, 00:22:30.788 "dhchap_digests": [ 00:22:30.788 "sha256", 00:22:30.788 "sha384", 00:22:30.788 "sha512" 00:22:30.788 ], 00:22:30.788 "dhchap_dhgroups": [ 00:22:30.788 "null", 00:22:30.788 "ffdhe2048", 00:22:30.788 "ffdhe3072", 00:22:30.788 "ffdhe4096", 00:22:30.788 "ffdhe6144", 00:22:30.788 "ffdhe8192" 00:22:30.788 ] 00:22:30.788 } 00:22:30.788 }, 00:22:30.788 { 00:22:30.788 "method": "bdev_nvme_set_hotplug", 00:22:30.788 "params": { 00:22:30.788 "period_us": 100000, 00:22:30.788 "enable": false 00:22:30.788 } 00:22:30.788 }, 00:22:30.788 { 00:22:30.788 "method": "bdev_malloc_create", 00:22:30.788 "params": { 00:22:30.788 "name": "malloc0", 00:22:30.788 "num_blocks": 8192, 00:22:30.788 
"block_size": 4096, 00:22:30.788 "physical_block_size": 4096, 00:22:30.788 "uuid": "49c14244-e643-416f-8b30-e10db898eb3c", 00:22:30.788 "optimal_io_boundary": 0 00:22:30.788 } 00:22:30.788 }, 00:22:30.788 { 00:22:30.788 "method": "bdev_wait_for_examine" 00:22:30.788 } 00:22:30.788 ] 00:22:30.788 }, 00:22:30.788 { 00:22:30.788 "subsystem": "nbd", 00:22:30.788 "config": [] 00:22:30.788 }, 00:22:30.788 { 00:22:30.788 "subsystem": "scheduler", 00:22:30.788 "config": [ 00:22:30.788 { 00:22:30.788 "method": "framework_set_scheduler", 00:22:30.788 "params": { 00:22:30.788 "name": "static" 00:22:30.788 } 00:22:30.788 } 00:22:30.788 ] 00:22:30.788 }, 00:22:30.788 { 00:22:30.788 "subsystem": "nvmf", 00:22:30.788 "config": [ 00:22:30.788 { 00:22:30.788 "method": "nvmf_set_config", 00:22:30.788 "params": { 00:22:30.788 "discovery_filter": "match_any", 00:22:30.788 "admin_cmd_passthru": { 00:22:30.788 "identify_ctrlr": false 00:22:30.788 } 00:22:30.788 } 00:22:30.788 }, 00:22:30.788 { 00:22:30.788 "method": "nvmf_set_max_subsystems", 00:22:30.788 "params": { 00:22:30.788 "max_subsystems": 1024 00:22:30.788 } 00:22:30.788 }, 00:22:30.788 { 00:22:30.788 "method": "nvmf_set_crdt", 00:22:30.788 "params": { 00:22:30.788 "crdt1": 0, 00:22:30.789 "crdt2": 0, 00:22:30.789 "crdt3": 0 00:22:30.789 } 00:22:30.789 }, 00:22:30.789 { 00:22:30.789 "method": "nvmf_create_transport", 00:22:30.789 "params": { 00:22:30.789 "trtype": "TCP", 00:22:30.789 "max_queue_depth": 128, 00:22:30.789 "max_io_qpairs_per_ctrlr": 127, 00:22:30.789 "in_capsule_data_size": 4096, 00:22:30.789 "max_io_size": 131072, 00:22:30.789 "io_unit_size": 131072, 00:22:30.789 "max_aq_depth": 128, 00:22:30.789 "num_shared_buffers": 511, 00:22:30.789 "buf_cache_size": 4294967295, 00:22:30.789 "dif_insert_or_strip": false, 00:22:30.789 "zcopy": false, 00:22:30.789 "c2h_success": false, 00:22:30.789 "sock_priority": 0, 00:22:30.789 "abort_timeout_sec": 1, 00:22:30.789 "ack_timeout": 0, 00:22:30.789 "data_wr_pool_size": 0 00:22:30.789 } 00:22:30.789 }, 00:22:30.789 { 00:22:30.789 "method": "nvmf_create_subsystem", 00:22:30.789 "params": { 00:22:30.789 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:30.789 "allow_any_host": false, 00:22:30.789 "serial_number": "SPDK00000000000001", 00:22:30.789 "model_number": "SPDK bdev Controller", 00:22:30.789 "max_namespaces": 10, 00:22:30.789 "min_cntlid": 1, 00:22:30.789 "max_cntlid": 65519, 00:22:30.789 "ana_reporting": false 00:22:30.789 } 00:22:30.789 }, 00:22:30.789 { 00:22:30.789 "method": "nvmf_subsystem_add_host", 00:22:30.789 "params": { 00:22:30.789 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:30.789 "host": "nqn.2016-06.io.spdk:host1", 00:22:30.789 "psk": "/tmp/tmp.ZinUbPIZll" 00:22:30.789 } 00:22:30.789 }, 00:22:30.789 { 00:22:30.789 "method": "nvmf_subsystem_add_ns", 00:22:30.789 "params": { 00:22:30.789 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:30.789 "namespace": { 00:22:30.789 "nsid": 1, 00:22:30.789 "bdev_name": "malloc0", 00:22:30.789 "nguid": "49C14244E643416F8B30E10DB898EB3C", 00:22:30.789 "uuid": "49c14244-e643-416f-8b30-e10db898eb3c", 00:22:30.789 "no_auto_visible": false 00:22:30.789 } 00:22:30.789 } 00:22:30.789 }, 00:22:30.789 { 00:22:30.789 "method": "nvmf_subsystem_add_listener", 00:22:30.789 "params": { 00:22:30.789 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:30.789 "listen_address": { 00:22:30.789 "trtype": "TCP", 00:22:30.789 "adrfam": "IPv4", 00:22:30.789 "traddr": "10.0.0.2", 00:22:30.789 "trsvcid": "4420" 00:22:30.789 }, 00:22:30.789 "secure_channel": true 00:22:30.789 } 00:22:30.789 } 
00:22:30.789 ] 00:22:30.789 } 00:22:30.789 ] 00:22:30.789 }' 00:22:30.789 01:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:30.789 01:50:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4093045 00:22:30.789 01:50:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:30.789 01:50:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4093045 00:22:30.789 01:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 4093045 ']' 00:22:30.789 01:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.789 01:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:30.789 01:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:30.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:30.789 01:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:30.789 01:50:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:30.789 [2024-05-15 01:50:54.536259] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:22:30.789 [2024-05-15 01:50:54.536349] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:30.789 EAL: No free 2048 kB hugepages reported on node 1 00:22:30.789 [2024-05-15 01:50:54.611925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.789 [2024-05-15 01:50:54.691398] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:30.789 [2024-05-15 01:50:54.691467] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:30.789 [2024-05-15 01:50:54.691490] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:30.789 [2024-05-15 01:50:54.691518] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:30.789 [2024-05-15 01:50:54.691528] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
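This run configures the target differently from the earlier ones: instead of issuing RPCs after startup, the JSON captured by save_config is echoed back and handed to nvmf_tgt as -c /dev/fd/62, so the transport, subsystem, listener, namespace, and PSK host all come up during initialization. A sketch of the pattern, assuming $tgtconf holds the JSON shown above (/dev/fd/62 in the log is the read end of a process substitution like this one):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf")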
00:22:30.789 [2024-05-15 01:50:54.691611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:31.047 [2024-05-15 01:50:54.915198] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:31.047 [2024-05-15 01:50:54.931142] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:31.047 [2024-05-15 01:50:54.947160] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:31.047 [2024-05-15 01:50:54.947248] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:31.047 [2024-05-15 01:50:54.955457] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:31.613 01:50:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:31.613 01:50:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:31.613 01:50:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:31.613 01:50:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:31.613 01:50:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:31.613 01:50:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:31.613 01:50:55 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=4093194 00:22:31.613 01:50:55 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 4093194 /var/tmp/bdevperf.sock 00:22:31.613 01:50:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 4093194 ']' 00:22:31.613 01:50:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:31.613 01:50:55 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:31.613 01:50:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:31.613 01:50:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:31.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
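The initiator side is bootstrapped the same way: the bdevperf config captured earlier, which embeds the bdev_nvme_attach_controller call with the PSK path, is fed in via -c /dev/fd/63, so the TLS connection is established during startup rather than by a later rpc.py call. A sketch of the tls.sh@204 invocation, assuming $bdevperfconf holds the JSON echoed below:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 \
        -c <(echo "$bdevperfconf")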
00:22:31.613 01:50:55 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:22:31.613 "subsystems": [ 00:22:31.613 { 00:22:31.613 "subsystem": "keyring", 00:22:31.613 "config": [] 00:22:31.613 }, 00:22:31.613 { 00:22:31.613 "subsystem": "iobuf", 00:22:31.613 "config": [ 00:22:31.613 { 00:22:31.613 "method": "iobuf_set_options", 00:22:31.613 "params": { 00:22:31.613 "small_pool_count": 8192, 00:22:31.613 "large_pool_count": 1024, 00:22:31.613 "small_bufsize": 8192, 00:22:31.613 "large_bufsize": 135168 00:22:31.613 } 00:22:31.613 } 00:22:31.613 ] 00:22:31.613 }, 00:22:31.613 { 00:22:31.613 "subsystem": "sock", 00:22:31.613 "config": [ 00:22:31.613 { 00:22:31.613 "method": "sock_impl_set_options", 00:22:31.613 "params": { 00:22:31.613 "impl_name": "posix", 00:22:31.613 "recv_buf_size": 2097152, 00:22:31.613 "send_buf_size": 2097152, 00:22:31.613 "enable_recv_pipe": true, 00:22:31.613 "enable_quickack": false, 00:22:31.613 "enable_placement_id": 0, 00:22:31.613 "enable_zerocopy_send_server": true, 00:22:31.613 "enable_zerocopy_send_client": false, 00:22:31.613 "zerocopy_threshold": 0, 00:22:31.613 "tls_version": 0, 00:22:31.613 "enable_ktls": false 00:22:31.613 } 00:22:31.613 }, 00:22:31.613 { 00:22:31.613 "method": "sock_impl_set_options", 00:22:31.613 "params": { 00:22:31.613 "impl_name": "ssl", 00:22:31.613 "recv_buf_size": 4096, 00:22:31.613 "send_buf_size": 4096, 00:22:31.613 "enable_recv_pipe": true, 00:22:31.613 "enable_quickack": false, 00:22:31.613 "enable_placement_id": 0, 00:22:31.613 "enable_zerocopy_send_server": true, 00:22:31.613 "enable_zerocopy_send_client": false, 00:22:31.613 "zerocopy_threshold": 0, 00:22:31.613 "tls_version": 0, 00:22:31.613 "enable_ktls": false 00:22:31.613 } 00:22:31.613 } 00:22:31.613 ] 00:22:31.613 }, 00:22:31.613 { 00:22:31.613 "subsystem": "vmd", 00:22:31.613 "config": [] 00:22:31.613 }, 00:22:31.613 { 00:22:31.613 "subsystem": "accel", 00:22:31.613 "config": [ 00:22:31.613 { 00:22:31.613 "method": "accel_set_options", 00:22:31.613 "params": { 00:22:31.613 "small_cache_size": 128, 00:22:31.613 "large_cache_size": 16, 00:22:31.613 "task_count": 2048, 00:22:31.613 "sequence_count": 2048, 00:22:31.613 "buf_count": 2048 00:22:31.613 } 00:22:31.613 } 00:22:31.613 ] 00:22:31.613 }, 00:22:31.613 { 00:22:31.613 "subsystem": "bdev", 00:22:31.613 "config": [ 00:22:31.613 { 00:22:31.613 "method": "bdev_set_options", 00:22:31.613 "params": { 00:22:31.613 "bdev_io_pool_size": 65535, 00:22:31.613 "bdev_io_cache_size": 256, 00:22:31.613 "bdev_auto_examine": true, 00:22:31.613 "iobuf_small_cache_size": 128, 00:22:31.613 "iobuf_large_cache_size": 16 00:22:31.613 } 00:22:31.613 }, 00:22:31.613 { 00:22:31.613 "method": "bdev_raid_set_options", 00:22:31.613 "params": { 00:22:31.613 "process_window_size_kb": 1024 00:22:31.613 } 00:22:31.613 }, 00:22:31.613 { 00:22:31.613 "method": "bdev_iscsi_set_options", 00:22:31.613 "params": { 00:22:31.613 "timeout_sec": 30 00:22:31.613 } 00:22:31.613 }, 00:22:31.613 { 00:22:31.613 "method": "bdev_nvme_set_options", 00:22:31.613 "params": { 00:22:31.613 "action_on_timeout": "none", 00:22:31.613 "timeout_us": 0, 00:22:31.613 "timeout_admin_us": 0, 00:22:31.613 "keep_alive_timeout_ms": 10000, 00:22:31.613 "arbitration_burst": 0, 00:22:31.613 "low_priority_weight": 0, 00:22:31.613 "medium_priority_weight": 0, 00:22:31.613 "high_priority_weight": 0, 00:22:31.613 "nvme_adminq_poll_period_us": 10000, 00:22:31.613 "nvme_ioq_poll_period_us": 0, 00:22:31.613 "io_queue_requests": 512, 00:22:31.613 "delay_cmd_submit": true, 00:22:31.613 
"transport_retry_count": 4, 00:22:31.613 "bdev_retry_count": 3, 00:22:31.613 "transport_ack_timeout": 0, 00:22:31.613 "ctrlr_loss_timeout_sec": 0, 00:22:31.613 "reconnect_delay_sec": 0, 00:22:31.613 "fast_io_fail_timeout_sec": 0, 00:22:31.613 "disable_auto_failback": false, 00:22:31.613 "generate_uuids": false, 00:22:31.613 "transport_tos": 0, 00:22:31.613 "nvme_error_stat": false, 00:22:31.613 "rdma_srq_size": 0, 00:22:31.613 "io_path_stat": false, 00:22:31.613 "allow_accel_sequence": false, 00:22:31.613 "rdma_max_cq_size": 0, 00:22:31.613 "rdma_cm_event_timeout_ms": 0, 00:22:31.613 "dhchap_digests": [ 00:22:31.613 "sha256", 00:22:31.613 "sha384", 00:22:31.613 "sha512" 00:22:31.613 ], 00:22:31.613 "dhchap_dhgroups": [ 00:22:31.613 "null", 00:22:31.613 "ffdhe2048", 00:22:31.613 "ffdhe3072", 00:22:31.613 "ffdhe4096", 00:22:31.613 "ffdhe6144", 00:22:31.613 "ffdhe8192" 00:22:31.613 ] 00:22:31.613 } 00:22:31.613 }, 00:22:31.613 { 00:22:31.613 "method": "bdev_nvme_attach_controller", 00:22:31.613 "params": { 00:22:31.613 "name": "TLSTEST", 00:22:31.613 "trtype": "TCP", 00:22:31.613 "adrfam": "IPv4", 00:22:31.613 "traddr": "10.0.0.2", 00:22:31.613 "trsvcid": "4420", 00:22:31.613 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.613 "prchk_reftag": false, 00:22:31.613 "prchk_guard": false, 00:22:31.613 "ctrlr_loss_timeout_sec": 0, 00:22:31.613 "reconnect_delay_sec": 0, 00:22:31.613 "fast_io_fail_timeout_sec": 0, 00:22:31.613 "psk": "/tmp/tmp.ZinUbPIZll", 00:22:31.613 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:31.613 "hdgst": false, 00:22:31.613 "ddgst": false 00:22:31.613 } 00:22:31.613 }, 00:22:31.613 { 00:22:31.613 "method": "bdev_nvme_set_hotplug", 00:22:31.613 "params": { 00:22:31.613 "period_us": 100000, 00:22:31.613 "enable": false 00:22:31.613 } 00:22:31.613 }, 00:22:31.613 { 00:22:31.614 "method": "bdev_wait_for_examine" 00:22:31.614 } 00:22:31.614 ] 00:22:31.614 }, 00:22:31.614 { 00:22:31.614 "subsystem": "nbd", 00:22:31.614 "config": [] 00:22:31.614 } 00:22:31.614 ] 00:22:31.614 }' 00:22:31.614 01:50:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:31.614 01:50:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:31.872 [2024-05-15 01:50:55.581184] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
00:22:31.872 [2024-05-15 01:50:55.581297] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4093194 ] 00:22:31.872 EAL: No free 2048 kB hugepages reported on node 1 00:22:31.872 [2024-05-15 01:50:55.646803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.872 [2024-05-15 01:50:55.725966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:32.130 [2024-05-15 01:50:55.886174] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:32.130 [2024-05-15 01:50:55.886354] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:32.695 01:50:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:32.695 01:50:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:32.695 01:50:56 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:32.952 Running I/O for 10 seconds... 00:22:42.912 00:22:42.912 Latency(us) 00:22:42.912 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:42.912 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:42.912 Verification LBA range: start 0x0 length 0x2000 00:22:42.912 TLSTESTn1 : 10.02 3473.68 13.57 0.00 0.00 36786.61 7815.77 36700.16 00:22:42.912 =================================================================================================================== 00:22:42.912 Total : 3473.68 13.57 0.00 0.00 36786.61 7815.77 36700.16 00:22:42.912 0 00:22:42.912 01:51:06 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:42.912 01:51:06 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 4093194 00:22:42.912 01:51:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 4093194 ']' 00:22:42.912 01:51:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 4093194 00:22:42.912 01:51:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:42.912 01:51:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:42.912 01:51:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4093194 00:22:42.912 01:51:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:22:42.912 01:51:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:22:42.912 01:51:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4093194' 00:22:42.912 killing process with pid 4093194 00:22:42.912 01:51:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 4093194 00:22:42.912 Received shutdown signal, test time was about 10.000000 seconds 00:22:42.912 00:22:42.912 Latency(us) 00:22:42.912 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:42.912 =================================================================================================================== 00:22:42.912 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:42.912 [2024-05-15 01:51:06.767272] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for 
removal in v24.09 hit 1 times 00:22:42.912 01:51:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 4093194 00:22:43.170 01:51:06 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 4093045 00:22:43.170 01:51:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 4093045 ']' 00:22:43.170 01:51:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 4093045 00:22:43.170 01:51:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:43.170 01:51:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:43.170 01:51:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4093045 00:22:43.170 01:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:22:43.170 01:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:22:43.170 01:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4093045' 00:22:43.170 killing process with pid 4093045 00:22:43.170 01:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 4093045 00:22:43.170 [2024-05-15 01:51:07.021450] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:43.170 [2024-05-15 01:51:07.021538] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:43.170 01:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 4093045 00:22:43.428 01:51:07 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:22:43.428 01:51:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:43.428 01:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:22:43.428 01:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:43.428 01:51:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4094641 00:22:43.428 01:51:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:43.428 01:51:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4094641 00:22:43.428 01:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 4094641 ']' 00:22:43.428 01:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:43.428 01:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:43.428 01:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:43.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:43.428 01:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:43.428 01:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:43.428 [2024-05-15 01:51:07.306738] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
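The 10-second verify run above is internally consistent: 3473.68 IOPS of 4096-byte I/Os works out to 3473.68 * 4096 / 1048576 ~= 13.57 MiB/s, matching the MiB/s column, and Little's law recovers the configured queue depth (3473.68 IOPS * 36.79 ms average latency ~= 128 outstanding I/Os). Fail/s and TO/s are both zero, so the TLS path passed. With bdevperf (pid 4093194) and the first target (pid 4093045) torn down, and the PSK-path deprecation counters each logged once on shutdown, a fresh nvmf_tgt (pid 4094641) is started for the next case.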
00:22:43.428 [2024-05-15 01:51:07.306829] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:43.428 EAL: No free 2048 kB hugepages reported on node 1 00:22:43.686 [2024-05-15 01:51:07.383806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.686 [2024-05-15 01:51:07.467572] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:43.687 [2024-05-15 01:51:07.467625] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:43.687 [2024-05-15 01:51:07.467662] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:43.687 [2024-05-15 01:51:07.467676] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:43.687 [2024-05-15 01:51:07.467686] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:43.687 [2024-05-15 01:51:07.467711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:43.687 01:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:43.687 01:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:43.687 01:51:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:43.687 01:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:43.687 01:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:43.687 01:51:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:43.687 01:51:07 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.ZinUbPIZll 00:22:43.687 01:51:07 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ZinUbPIZll 00:22:43.687 01:51:07 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:43.945 [2024-05-15 01:51:07.843989] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:43.945 01:51:07 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:44.202 01:51:08 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:44.460 [2024-05-15 01:51:08.353320] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:44.460 [2024-05-15 01:51:08.353440] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:44.460 [2024-05-15 01:51:08.353691] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:44.460 01:51:08 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:44.718 malloc0 00:22:44.718 01:51:08 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
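setup_nvmf_tgt condenses to a short RPC sequence against the new target, all of it visible in the lines above; the -k flag on the listener is what requests TLS and triggers the 'experimental' notice (full workspace path to rpc.py elided):

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0    # 32 MiB ramdisk, 4 KiB blocks
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1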
00:22:44.976 01:51:08 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZinUbPIZll 00:22:45.233 [2024-05-15 01:51:09.119335] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:45.233 01:51:09 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=4095419 00:22:45.233 01:51:09 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:45.233 01:51:09 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:45.233 01:51:09 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 4095419 /var/tmp/bdevperf.sock 00:22:45.233 01:51:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 4095419 ']' 00:22:45.233 01:51:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:45.233 01:51:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:45.233 01:51:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:45.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:45.233 01:51:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:45.233 01:51:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:45.491 [2024-05-15 01:51:09.174601] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:22:45.491 [2024-05-15 01:51:09.174678] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4095419 ] 00:22:45.491 EAL: No free 2048 kB hugepages reported on node 1 00:22:45.491 [2024-05-15 01:51:09.242761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.491 [2024-05-15 01:51:09.324703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.749 01:51:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:45.749 01:51:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:45.749 01:51:09 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZinUbPIZll 00:22:45.749 01:51:09 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:46.008 [2024-05-15 01:51:09.887329] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:46.266 nvme0n1 00:22:46.266 01:51:09 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:46.266 Running I/O for 1 seconds... 
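This case moves the initiator over to the keyring interface: the host is still authorized on the target with a raw PSK path (hence the nvmf_tcp_psk_path warning above), but bdevperf now registers the key file under a name and attaches by that name instead of by path:

    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZinUbPIZll
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

The attach yields the nvme0n1 bdev, and the one-second verify run whose results follow drives I/O through that keyring-backed TLS connection.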
00:22:47.195 00:22:47.195 Latency(us) 00:22:47.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.195 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:47.195 Verification LBA range: start 0x0 length 0x2000 00:22:47.195 nvme0n1 : 1.02 3274.40 12.79 0.00 0.00 38728.03 6407.96 52040.44 00:22:47.195 =================================================================================================================== 00:22:47.195 Total : 3274.40 12.79 0.00 0.00 38728.03 6407.96 52040.44 00:22:47.195 0 00:22:47.195 01:51:11 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 4095419 00:22:47.196 01:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 4095419 ']' 00:22:47.196 01:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 4095419 00:22:47.196 01:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:47.196 01:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:47.196 01:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4095419 00:22:47.453 01:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:22:47.453 01:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:22:47.453 01:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4095419' 00:22:47.453 killing process with pid 4095419 00:22:47.453 01:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 4095419 00:22:47.453 Received shutdown signal, test time was about 1.000000 seconds 00:22:47.453 00:22:47.453 Latency(us) 00:22:47.453 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.453 =================================================================================================================== 00:22:47.453 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:47.453 01:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 4095419 00:22:47.453 01:51:11 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 4094641 00:22:47.453 01:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 4094641 ']' 00:22:47.453 01:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 4094641 00:22:47.453 01:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:47.453 01:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:47.453 01:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4094641 00:22:47.711 01:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:22:47.711 01:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:22:47.711 01:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4094641' 00:22:47.711 killing process with pid 4094641 00:22:47.711 01:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 4094641 00:22:47.711 [2024-05-15 01:51:11.387008] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:47.711 [2024-05-15 01:51:11.387066] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:47.711 01:51:11 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@971 -- # wait 4094641 00:22:47.711 01:51:11 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:22:47.711 01:51:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:47.711 01:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:22:47.711 01:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:47.711 01:51:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4095707 00:22:47.711 01:51:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:47.711 01:51:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4095707 00:22:47.711 01:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 4095707 ']' 00:22:47.711 01:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:47.711 01:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:47.711 01:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:47.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:47.711 01:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:47.711 01:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:47.969 [2024-05-15 01:51:11.687428] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:22:47.969 [2024-05-15 01:51:11.687502] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:47.969 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.969 [2024-05-15 01:51:11.764974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.969 [2024-05-15 01:51:11.848709] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:47.969 [2024-05-15 01:51:11.848771] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:47.969 [2024-05-15 01:51:11.848787] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:47.969 [2024-05-15 01:51:11.848800] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:47.969 [2024-05-15 01:51:11.848812] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
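The same start-up handshake repeats for every daemon in this log: the script records the new pid, then waitforlisten blocks until the RPC socket answers. Conceptually the wait amounts to polling a cheap RPC until it succeeds; a sketch of the helper's behavior, not its exact code:

    while ! rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done

which is why every successful start-up is immediately followed by the '(( i == 0 ))' and 'return 0' xtrace lines.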
00:22:47.969 [2024-05-15 01:51:11.848855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.226 01:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:48.226 01:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:48.226 01:51:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:48.226 01:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:48.226 01:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:48.226 01:51:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:48.226 01:51:11 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:22:48.226 01:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:48.226 01:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:48.226 [2024-05-15 01:51:12.000061] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:48.226 malloc0 00:22:48.226 [2024-05-15 01:51:12.032799] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:48.226 [2024-05-15 01:51:12.032895] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:48.226 [2024-05-15 01:51:12.033142] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:48.226 01:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:48.226 01:51:12 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=4095732 00:22:48.226 01:51:12 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:48.226 01:51:12 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 4095732 /var/tmp/bdevperf.sock 00:22:48.226 01:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 4095732 ']' 00:22:48.226 01:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:48.226 01:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:48.226 01:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:48.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:48.226 01:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:48.226 01:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:48.226 [2024-05-15 01:51:12.101855] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
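This third target (pid 4095707) and bdevperf instance (pid 4095732) exist to validate configuration persistence: once the keyring-backed TLS connection is re-established and verified, both daemons are asked to serialize their live state with save_config, roughly:

    rpc.py save_config                              # target side, captured as $tgtcfg
    rpc.py -s /var/tmp/bdevperf.sock save_config    # initiator side, captured as $bperfcfg

The script holds the output in the tgtcfg and bperfcfg shell variables; the two large JSON dumps that follow are those snapshots.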
00:22:48.226 [2024-05-15 01:51:12.101931] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4095732 ] 00:22:48.226 EAL: No free 2048 kB hugepages reported on node 1 00:22:48.483 [2024-05-15 01:51:12.172720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.483 [2024-05-15 01:51:12.260285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:48.483 01:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:48.483 01:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:48.483 01:51:12 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZinUbPIZll 00:22:48.768 01:51:12 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:49.048 [2024-05-15 01:51:12.857142] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:49.048 nvme0n1 00:22:49.048 01:51:12 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:49.306 Running I/O for 1 seconds... 00:22:50.238 00:22:50.238 Latency(us) 00:22:50.238 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.238 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:50.238 Verification LBA range: start 0x0 length 0x2000 00:22:50.238 nvme0n1 : 1.02 3293.80 12.87 0.00 0.00 38461.85 6213.78 33593.27 00:22:50.239 =================================================================================================================== 00:22:50.239 Total : 3293.80 12.87 0.00 0.00 38461.85 6213.78 33593.27 00:22:50.239 0 00:22:50.239 01:51:14 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:22:50.239 01:51:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:50.239 01:51:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:50.496 01:51:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:50.496 01:51:14 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:22:50.496 "subsystems": [ 00:22:50.496 { 00:22:50.496 "subsystem": "keyring", 00:22:50.496 "config": [ 00:22:50.496 { 00:22:50.496 "method": "keyring_file_add_key", 00:22:50.496 "params": { 00:22:50.496 "name": "key0", 00:22:50.496 "path": "/tmp/tmp.ZinUbPIZll" 00:22:50.496 } 00:22:50.496 } 00:22:50.496 ] 00:22:50.496 }, 00:22:50.496 { 00:22:50.496 "subsystem": "iobuf", 00:22:50.496 "config": [ 00:22:50.496 { 00:22:50.496 "method": "iobuf_set_options", 00:22:50.496 "params": { 00:22:50.496 "small_pool_count": 8192, 00:22:50.496 "large_pool_count": 1024, 00:22:50.496 "small_bufsize": 8192, 00:22:50.496 "large_bufsize": 135168 00:22:50.496 } 00:22:50.496 } 00:22:50.496 ] 00:22:50.496 }, 00:22:50.496 { 00:22:50.496 "subsystem": "sock", 00:22:50.496 "config": [ 00:22:50.496 { 00:22:50.496 "method": "sock_impl_set_options", 00:22:50.496 "params": { 00:22:50.496 "impl_name": "posix", 00:22:50.496 "recv_buf_size": 2097152, 
00:22:50.496 "send_buf_size": 2097152, 00:22:50.496 "enable_recv_pipe": true, 00:22:50.496 "enable_quickack": false, 00:22:50.497 "enable_placement_id": 0, 00:22:50.497 "enable_zerocopy_send_server": true, 00:22:50.497 "enable_zerocopy_send_client": false, 00:22:50.497 "zerocopy_threshold": 0, 00:22:50.497 "tls_version": 0, 00:22:50.497 "enable_ktls": false 00:22:50.497 } 00:22:50.497 }, 00:22:50.497 { 00:22:50.497 "method": "sock_impl_set_options", 00:22:50.497 "params": { 00:22:50.497 "impl_name": "ssl", 00:22:50.497 "recv_buf_size": 4096, 00:22:50.497 "send_buf_size": 4096, 00:22:50.497 "enable_recv_pipe": true, 00:22:50.497 "enable_quickack": false, 00:22:50.497 "enable_placement_id": 0, 00:22:50.497 "enable_zerocopy_send_server": true, 00:22:50.497 "enable_zerocopy_send_client": false, 00:22:50.497 "zerocopy_threshold": 0, 00:22:50.497 "tls_version": 0, 00:22:50.497 "enable_ktls": false 00:22:50.497 } 00:22:50.497 } 00:22:50.497 ] 00:22:50.497 }, 00:22:50.497 { 00:22:50.497 "subsystem": "vmd", 00:22:50.497 "config": [] 00:22:50.497 }, 00:22:50.497 { 00:22:50.497 "subsystem": "accel", 00:22:50.497 "config": [ 00:22:50.497 { 00:22:50.497 "method": "accel_set_options", 00:22:50.497 "params": { 00:22:50.497 "small_cache_size": 128, 00:22:50.497 "large_cache_size": 16, 00:22:50.497 "task_count": 2048, 00:22:50.497 "sequence_count": 2048, 00:22:50.497 "buf_count": 2048 00:22:50.497 } 00:22:50.497 } 00:22:50.497 ] 00:22:50.497 }, 00:22:50.497 { 00:22:50.497 "subsystem": "bdev", 00:22:50.497 "config": [ 00:22:50.497 { 00:22:50.497 "method": "bdev_set_options", 00:22:50.497 "params": { 00:22:50.497 "bdev_io_pool_size": 65535, 00:22:50.497 "bdev_io_cache_size": 256, 00:22:50.497 "bdev_auto_examine": true, 00:22:50.497 "iobuf_small_cache_size": 128, 00:22:50.497 "iobuf_large_cache_size": 16 00:22:50.497 } 00:22:50.497 }, 00:22:50.497 { 00:22:50.497 "method": "bdev_raid_set_options", 00:22:50.497 "params": { 00:22:50.497 "process_window_size_kb": 1024 00:22:50.497 } 00:22:50.497 }, 00:22:50.497 { 00:22:50.497 "method": "bdev_iscsi_set_options", 00:22:50.497 "params": { 00:22:50.497 "timeout_sec": 30 00:22:50.497 } 00:22:50.497 }, 00:22:50.497 { 00:22:50.497 "method": "bdev_nvme_set_options", 00:22:50.497 "params": { 00:22:50.497 "action_on_timeout": "none", 00:22:50.497 "timeout_us": 0, 00:22:50.497 "timeout_admin_us": 0, 00:22:50.497 "keep_alive_timeout_ms": 10000, 00:22:50.497 "arbitration_burst": 0, 00:22:50.497 "low_priority_weight": 0, 00:22:50.497 "medium_priority_weight": 0, 00:22:50.497 "high_priority_weight": 0, 00:22:50.497 "nvme_adminq_poll_period_us": 10000, 00:22:50.497 "nvme_ioq_poll_period_us": 0, 00:22:50.497 "io_queue_requests": 0, 00:22:50.497 "delay_cmd_submit": true, 00:22:50.497 "transport_retry_count": 4, 00:22:50.497 "bdev_retry_count": 3, 00:22:50.497 "transport_ack_timeout": 0, 00:22:50.497 "ctrlr_loss_timeout_sec": 0, 00:22:50.497 "reconnect_delay_sec": 0, 00:22:50.497 "fast_io_fail_timeout_sec": 0, 00:22:50.497 "disable_auto_failback": false, 00:22:50.497 "generate_uuids": false, 00:22:50.497 "transport_tos": 0, 00:22:50.497 "nvme_error_stat": false, 00:22:50.497 "rdma_srq_size": 0, 00:22:50.497 "io_path_stat": false, 00:22:50.497 "allow_accel_sequence": false, 00:22:50.497 "rdma_max_cq_size": 0, 00:22:50.497 "rdma_cm_event_timeout_ms": 0, 00:22:50.497 "dhchap_digests": [ 00:22:50.497 "sha256", 00:22:50.497 "sha384", 00:22:50.497 "sha512" 00:22:50.497 ], 00:22:50.497 "dhchap_dhgroups": [ 00:22:50.497 "null", 00:22:50.497 "ffdhe2048", 00:22:50.497 "ffdhe3072", 
00:22:50.497 "ffdhe4096", 00:22:50.497 "ffdhe6144", 00:22:50.497 "ffdhe8192" 00:22:50.497 ] 00:22:50.497 } 00:22:50.497 }, 00:22:50.497 { 00:22:50.497 "method": "bdev_nvme_set_hotplug", 00:22:50.497 "params": { 00:22:50.497 "period_us": 100000, 00:22:50.497 "enable": false 00:22:50.497 } 00:22:50.497 }, 00:22:50.497 { 00:22:50.497 "method": "bdev_malloc_create", 00:22:50.497 "params": { 00:22:50.497 "name": "malloc0", 00:22:50.497 "num_blocks": 8192, 00:22:50.497 "block_size": 4096, 00:22:50.497 "physical_block_size": 4096, 00:22:50.497 "uuid": "c2b6efa1-88c6-40f1-9657-44653d02d0a3", 00:22:50.497 "optimal_io_boundary": 0 00:22:50.497 } 00:22:50.497 }, 00:22:50.497 { 00:22:50.497 "method": "bdev_wait_for_examine" 00:22:50.497 } 00:22:50.497 ] 00:22:50.497 }, 00:22:50.497 { 00:22:50.497 "subsystem": "nbd", 00:22:50.497 "config": [] 00:22:50.497 }, 00:22:50.497 { 00:22:50.497 "subsystem": "scheduler", 00:22:50.497 "config": [ 00:22:50.497 { 00:22:50.497 "method": "framework_set_scheduler", 00:22:50.497 "params": { 00:22:50.497 "name": "static" 00:22:50.497 } 00:22:50.497 } 00:22:50.497 ] 00:22:50.497 }, 00:22:50.497 { 00:22:50.497 "subsystem": "nvmf", 00:22:50.497 "config": [ 00:22:50.497 { 00:22:50.497 "method": "nvmf_set_config", 00:22:50.497 "params": { 00:22:50.497 "discovery_filter": "match_any", 00:22:50.497 "admin_cmd_passthru": { 00:22:50.497 "identify_ctrlr": false 00:22:50.497 } 00:22:50.497 } 00:22:50.497 }, 00:22:50.497 { 00:22:50.497 "method": "nvmf_set_max_subsystems", 00:22:50.497 "params": { 00:22:50.497 "max_subsystems": 1024 00:22:50.497 } 00:22:50.497 }, 00:22:50.497 { 00:22:50.497 "method": "nvmf_set_crdt", 00:22:50.497 "params": { 00:22:50.497 "crdt1": 0, 00:22:50.497 "crdt2": 0, 00:22:50.497 "crdt3": 0 00:22:50.497 } 00:22:50.497 }, 00:22:50.497 { 00:22:50.497 "method": "nvmf_create_transport", 00:22:50.497 "params": { 00:22:50.497 "trtype": "TCP", 00:22:50.497 "max_queue_depth": 128, 00:22:50.497 "max_io_qpairs_per_ctrlr": 127, 00:22:50.497 "in_capsule_data_size": 4096, 00:22:50.497 "max_io_size": 131072, 00:22:50.497 "io_unit_size": 131072, 00:22:50.497 "max_aq_depth": 128, 00:22:50.497 "num_shared_buffers": 511, 00:22:50.497 "buf_cache_size": 4294967295, 00:22:50.497 "dif_insert_or_strip": false, 00:22:50.497 "zcopy": false, 00:22:50.497 "c2h_success": false, 00:22:50.497 "sock_priority": 0, 00:22:50.497 "abort_timeout_sec": 1, 00:22:50.497 "ack_timeout": 0, 00:22:50.497 "data_wr_pool_size": 0 00:22:50.497 } 00:22:50.497 }, 00:22:50.497 { 00:22:50.497 "method": "nvmf_create_subsystem", 00:22:50.497 "params": { 00:22:50.497 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:50.497 "allow_any_host": false, 00:22:50.497 "serial_number": "00000000000000000000", 00:22:50.497 "model_number": "SPDK bdev Controller", 00:22:50.497 "max_namespaces": 32, 00:22:50.497 "min_cntlid": 1, 00:22:50.497 "max_cntlid": 65519, 00:22:50.497 "ana_reporting": false 00:22:50.497 } 00:22:50.497 }, 00:22:50.497 { 00:22:50.497 "method": "nvmf_subsystem_add_host", 00:22:50.497 "params": { 00:22:50.497 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:50.497 "host": "nqn.2016-06.io.spdk:host1", 00:22:50.497 "psk": "key0" 00:22:50.497 } 00:22:50.497 }, 00:22:50.497 { 00:22:50.497 "method": "nvmf_subsystem_add_ns", 00:22:50.497 "params": { 00:22:50.497 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:50.497 "namespace": { 00:22:50.497 "nsid": 1, 00:22:50.497 "bdev_name": "malloc0", 00:22:50.497 "nguid": "C2B6EFA188C640F1965744653D02D0A3", 00:22:50.497 "uuid": "c2b6efa1-88c6-40f1-9657-44653d02d0a3", 00:22:50.497 
"no_auto_visible": false 00:22:50.497 } 00:22:50.497 } 00:22:50.497 }, 00:22:50.497 { 00:22:50.497 "method": "nvmf_subsystem_add_listener", 00:22:50.497 "params": { 00:22:50.497 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:50.497 "listen_address": { 00:22:50.497 "trtype": "TCP", 00:22:50.497 "adrfam": "IPv4", 00:22:50.497 "traddr": "10.0.0.2", 00:22:50.497 "trsvcid": "4420" 00:22:50.497 }, 00:22:50.497 "secure_channel": true 00:22:50.497 } 00:22:50.497 } 00:22:50.497 ] 00:22:50.498 } 00:22:50.498 ] 00:22:50.498 }' 00:22:50.498 01:51:14 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:50.755 01:51:14 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:22:50.755 "subsystems": [ 00:22:50.755 { 00:22:50.755 "subsystem": "keyring", 00:22:50.755 "config": [ 00:22:50.755 { 00:22:50.755 "method": "keyring_file_add_key", 00:22:50.755 "params": { 00:22:50.755 "name": "key0", 00:22:50.755 "path": "/tmp/tmp.ZinUbPIZll" 00:22:50.755 } 00:22:50.755 } 00:22:50.755 ] 00:22:50.755 }, 00:22:50.755 { 00:22:50.755 "subsystem": "iobuf", 00:22:50.755 "config": [ 00:22:50.755 { 00:22:50.755 "method": "iobuf_set_options", 00:22:50.755 "params": { 00:22:50.755 "small_pool_count": 8192, 00:22:50.755 "large_pool_count": 1024, 00:22:50.755 "small_bufsize": 8192, 00:22:50.755 "large_bufsize": 135168 00:22:50.755 } 00:22:50.755 } 00:22:50.755 ] 00:22:50.755 }, 00:22:50.755 { 00:22:50.755 "subsystem": "sock", 00:22:50.755 "config": [ 00:22:50.755 { 00:22:50.755 "method": "sock_impl_set_options", 00:22:50.755 "params": { 00:22:50.755 "impl_name": "posix", 00:22:50.755 "recv_buf_size": 2097152, 00:22:50.755 "send_buf_size": 2097152, 00:22:50.755 "enable_recv_pipe": true, 00:22:50.755 "enable_quickack": false, 00:22:50.755 "enable_placement_id": 0, 00:22:50.755 "enable_zerocopy_send_server": true, 00:22:50.755 "enable_zerocopy_send_client": false, 00:22:50.755 "zerocopy_threshold": 0, 00:22:50.755 "tls_version": 0, 00:22:50.755 "enable_ktls": false 00:22:50.755 } 00:22:50.755 }, 00:22:50.755 { 00:22:50.755 "method": "sock_impl_set_options", 00:22:50.755 "params": { 00:22:50.755 "impl_name": "ssl", 00:22:50.755 "recv_buf_size": 4096, 00:22:50.755 "send_buf_size": 4096, 00:22:50.755 "enable_recv_pipe": true, 00:22:50.755 "enable_quickack": false, 00:22:50.755 "enable_placement_id": 0, 00:22:50.755 "enable_zerocopy_send_server": true, 00:22:50.755 "enable_zerocopy_send_client": false, 00:22:50.755 "zerocopy_threshold": 0, 00:22:50.755 "tls_version": 0, 00:22:50.755 "enable_ktls": false 00:22:50.755 } 00:22:50.755 } 00:22:50.755 ] 00:22:50.755 }, 00:22:50.755 { 00:22:50.755 "subsystem": "vmd", 00:22:50.755 "config": [] 00:22:50.755 }, 00:22:50.755 { 00:22:50.755 "subsystem": "accel", 00:22:50.755 "config": [ 00:22:50.755 { 00:22:50.755 "method": "accel_set_options", 00:22:50.755 "params": { 00:22:50.755 "small_cache_size": 128, 00:22:50.755 "large_cache_size": 16, 00:22:50.755 "task_count": 2048, 00:22:50.755 "sequence_count": 2048, 00:22:50.755 "buf_count": 2048 00:22:50.755 } 00:22:50.755 } 00:22:50.755 ] 00:22:50.755 }, 00:22:50.755 { 00:22:50.755 "subsystem": "bdev", 00:22:50.755 "config": [ 00:22:50.755 { 00:22:50.755 "method": "bdev_set_options", 00:22:50.755 "params": { 00:22:50.755 "bdev_io_pool_size": 65535, 00:22:50.755 "bdev_io_cache_size": 256, 00:22:50.755 "bdev_auto_examine": true, 00:22:50.755 "iobuf_small_cache_size": 128, 00:22:50.755 "iobuf_large_cache_size": 16 00:22:50.755 } 00:22:50.755 }, 
00:22:50.755 { 00:22:50.755 "method": "bdev_raid_set_options", 00:22:50.755 "params": { 00:22:50.755 "process_window_size_kb": 1024 00:22:50.755 } 00:22:50.755 }, 00:22:50.755 { 00:22:50.755 "method": "bdev_iscsi_set_options", 00:22:50.755 "params": { 00:22:50.755 "timeout_sec": 30 00:22:50.755 } 00:22:50.755 }, 00:22:50.755 { 00:22:50.755 "method": "bdev_nvme_set_options", 00:22:50.755 "params": { 00:22:50.755 "action_on_timeout": "none", 00:22:50.756 "timeout_us": 0, 00:22:50.756 "timeout_admin_us": 0, 00:22:50.756 "keep_alive_timeout_ms": 10000, 00:22:50.756 "arbitration_burst": 0, 00:22:50.756 "low_priority_weight": 0, 00:22:50.756 "medium_priority_weight": 0, 00:22:50.756 "high_priority_weight": 0, 00:22:50.756 "nvme_adminq_poll_period_us": 10000, 00:22:50.756 "nvme_ioq_poll_period_us": 0, 00:22:50.756 "io_queue_requests": 512, 00:22:50.756 "delay_cmd_submit": true, 00:22:50.756 "transport_retry_count": 4, 00:22:50.756 "bdev_retry_count": 3, 00:22:50.756 "transport_ack_timeout": 0, 00:22:50.756 "ctrlr_loss_timeout_sec": 0, 00:22:50.756 "reconnect_delay_sec": 0, 00:22:50.756 "fast_io_fail_timeout_sec": 0, 00:22:50.756 "disable_auto_failback": false, 00:22:50.756 "generate_uuids": false, 00:22:50.756 "transport_tos": 0, 00:22:50.756 "nvme_error_stat": false, 00:22:50.756 "rdma_srq_size": 0, 00:22:50.756 "io_path_stat": false, 00:22:50.756 "allow_accel_sequence": false, 00:22:50.756 "rdma_max_cq_size": 0, 00:22:50.756 "rdma_cm_event_timeout_ms": 0, 00:22:50.756 "dhchap_digests": [ 00:22:50.756 "sha256", 00:22:50.756 "sha384", 00:22:50.756 "sha512" 00:22:50.756 ], 00:22:50.756 "dhchap_dhgroups": [ 00:22:50.756 "null", 00:22:50.756 "ffdhe2048", 00:22:50.756 "ffdhe3072", 00:22:50.756 "ffdhe4096", 00:22:50.756 "ffdhe6144", 00:22:50.756 "ffdhe8192" 00:22:50.756 ] 00:22:50.756 } 00:22:50.756 }, 00:22:50.756 { 00:22:50.756 "method": "bdev_nvme_attach_controller", 00:22:50.756 "params": { 00:22:50.756 "name": "nvme0", 00:22:50.756 "trtype": "TCP", 00:22:50.756 "adrfam": "IPv4", 00:22:50.756 "traddr": "10.0.0.2", 00:22:50.756 "trsvcid": "4420", 00:22:50.756 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:50.756 "prchk_reftag": false, 00:22:50.756 "prchk_guard": false, 00:22:50.756 "ctrlr_loss_timeout_sec": 0, 00:22:50.756 "reconnect_delay_sec": 0, 00:22:50.756 "fast_io_fail_timeout_sec": 0, 00:22:50.756 "psk": "key0", 00:22:50.756 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:50.756 "hdgst": false, 00:22:50.756 "ddgst": false 00:22:50.756 } 00:22:50.756 }, 00:22:50.756 { 00:22:50.756 "method": "bdev_nvme_set_hotplug", 00:22:50.756 "params": { 00:22:50.756 "period_us": 100000, 00:22:50.756 "enable": false 00:22:50.756 } 00:22:50.756 }, 00:22:50.756 { 00:22:50.756 "method": "bdev_enable_histogram", 00:22:50.756 "params": { 00:22:50.756 "name": "nvme0n1", 00:22:50.756 "enable": true 00:22:50.756 } 00:22:50.756 }, 00:22:50.756 { 00:22:50.756 "method": "bdev_wait_for_examine" 00:22:50.756 } 00:22:50.756 ] 00:22:50.756 }, 00:22:50.756 { 00:22:50.756 "subsystem": "nbd", 00:22:50.756 "config": [] 00:22:50.756 } 00:22:50.756 ] 00:22:50.756 }' 00:22:50.756 01:51:14 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 4095732 00:22:50.756 01:51:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 4095732 ']' 00:22:50.756 01:51:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 4095732 00:22:50.756 01:51:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:50.756 01:51:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:50.756 
01:51:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4095732 00:22:50.756 01:51:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:22:50.756 01:51:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:22:50.756 01:51:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4095732' 00:22:50.756 killing process with pid 4095732 00:22:50.756 01:51:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 4095732 00:22:50.756 Received shutdown signal, test time was about 1.000000 seconds 00:22:50.756 00:22:50.756 Latency(us) 00:22:50.756 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.756 =================================================================================================================== 00:22:50.756 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:50.756 01:51:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 4095732 00:22:51.013 01:51:14 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 4095707 00:22:51.013 01:51:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 4095707 ']' 00:22:51.013 01:51:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 4095707 00:22:51.013 01:51:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:51.013 01:51:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:51.013 01:51:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4095707 00:22:51.013 01:51:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:22:51.013 01:51:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:22:51.013 01:51:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4095707' 00:22:51.013 killing process with pid 4095707 00:22:51.013 01:51:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 4095707 00:22:51.013 [2024-05-15 01:51:14.821535] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:51.013 01:51:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 4095707 00:22:51.270 01:51:15 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:22:51.270 01:51:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:51.270 01:51:15 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:22:51.270 "subsystems": [ 00:22:51.270 { 00:22:51.270 "subsystem": "keyring", 00:22:51.270 "config": [ 00:22:51.270 { 00:22:51.270 "method": "keyring_file_add_key", 00:22:51.270 "params": { 00:22:51.270 "name": "key0", 00:22:51.270 "path": "/tmp/tmp.ZinUbPIZll" 00:22:51.270 } 00:22:51.270 } 00:22:51.270 ] 00:22:51.270 }, 00:22:51.270 { 00:22:51.270 "subsystem": "iobuf", 00:22:51.270 "config": [ 00:22:51.270 { 00:22:51.270 "method": "iobuf_set_options", 00:22:51.270 "params": { 00:22:51.270 "small_pool_count": 8192, 00:22:51.270 "large_pool_count": 1024, 00:22:51.270 "small_bufsize": 8192, 00:22:51.270 "large_bufsize": 135168 00:22:51.270 } 00:22:51.270 } 00:22:51.270 ] 00:22:51.270 }, 00:22:51.270 { 00:22:51.270 "subsystem": "sock", 00:22:51.270 "config": [ 00:22:51.270 { 00:22:51.270 "method": "sock_impl_set_options", 00:22:51.270 "params": { 00:22:51.270 "impl_name": "posix", 00:22:51.270 
"recv_buf_size": 2097152, 00:22:51.270 "send_buf_size": 2097152, 00:22:51.270 "enable_recv_pipe": true, 00:22:51.270 "enable_quickack": false, 00:22:51.270 "enable_placement_id": 0, 00:22:51.270 "enable_zerocopy_send_server": true, 00:22:51.270 "enable_zerocopy_send_client": false, 00:22:51.270 "zerocopy_threshold": 0, 00:22:51.270 "tls_version": 0, 00:22:51.270 "enable_ktls": false 00:22:51.270 } 00:22:51.270 }, 00:22:51.270 { 00:22:51.270 "method": "sock_impl_set_options", 00:22:51.270 "params": { 00:22:51.270 "impl_name": "ssl", 00:22:51.270 "recv_buf_size": 4096, 00:22:51.270 "send_buf_size": 4096, 00:22:51.270 "enable_recv_pipe": true, 00:22:51.270 "enable_quickack": false, 00:22:51.270 "enable_placement_id": 0, 00:22:51.270 "enable_zerocopy_send_server": true, 00:22:51.270 "enable_zerocopy_send_client": false, 00:22:51.270 "zerocopy_threshold": 0, 00:22:51.270 "tls_version": 0, 00:22:51.270 "enable_ktls": false 00:22:51.270 } 00:22:51.270 } 00:22:51.270 ] 00:22:51.270 }, 00:22:51.270 { 00:22:51.270 "subsystem": "vmd", 00:22:51.270 "config": [] 00:22:51.270 }, 00:22:51.270 { 00:22:51.270 "subsystem": "accel", 00:22:51.270 "config": [ 00:22:51.270 { 00:22:51.270 "method": "accel_set_options", 00:22:51.270 "params": { 00:22:51.270 "small_cache_size": 128, 00:22:51.270 "large_cache_size": 16, 00:22:51.270 "task_count": 2048, 00:22:51.270 "sequence_count": 2048, 00:22:51.270 "buf_count": 2048 00:22:51.270 } 00:22:51.270 } 00:22:51.270 ] 00:22:51.270 }, 00:22:51.270 { 00:22:51.270 "subsystem": "bdev", 00:22:51.270 "config": [ 00:22:51.270 { 00:22:51.270 "method": "bdev_set_options", 00:22:51.270 "params": { 00:22:51.270 "bdev_io_pool_size": 65535, 00:22:51.270 "bdev_io_cache_size": 256, 00:22:51.270 "bdev_auto_examine": true, 00:22:51.270 "iobuf_small_cache_size": 128, 00:22:51.270 "iobuf_large_cache_size": 16 00:22:51.270 } 00:22:51.270 }, 00:22:51.270 { 00:22:51.270 "method": "bdev_raid_set_options", 00:22:51.270 "params": { 00:22:51.270 "process_window_size_kb": 1024 00:22:51.270 } 00:22:51.270 }, 00:22:51.270 { 00:22:51.270 "method": "bdev_iscsi_set_options", 00:22:51.270 "params": { 00:22:51.270 "timeout_sec": 30 00:22:51.270 } 00:22:51.270 }, 00:22:51.270 { 00:22:51.270 "method": "bdev_nvme_set_options", 00:22:51.270 "params": { 00:22:51.270 "action_on_timeout": "none", 00:22:51.270 "timeout_us": 0, 00:22:51.270 "timeout_admin_us": 0, 00:22:51.270 "keep_alive_timeout_ms": 10000, 00:22:51.270 "arbitration_burst": 0, 00:22:51.270 "low_priority_weight": 0, 00:22:51.270 "medium_priority_weight": 0, 00:22:51.270 "high_priority_weight": 0, 00:22:51.270 "nvme_adminq_poll_period_us": 10000, 00:22:51.270 "nvme_ioq_poll_period_us": 0, 00:22:51.270 "io_queue_requests": 0, 00:22:51.270 "delay_cmd_submit": true, 00:22:51.270 "transport_retry_count": 4, 00:22:51.270 "bdev_retry_count": 3, 00:22:51.270 "transport_ack_timeout": 0, 00:22:51.270 "ctrlr_loss_timeout_sec": 0, 00:22:51.270 "reconnect_delay_sec": 0, 00:22:51.270 "fast_io_fail_timeout_sec": 0, 00:22:51.270 "disable_auto_failback": false, 00:22:51.270 "generate_uuids": false, 00:22:51.270 "transport_tos": 0, 00:22:51.270 "nvme_error_stat": false, 00:22:51.270 "rdma_srq_size": 0, 00:22:51.270 "io_path_stat": false, 00:22:51.270 "allow_accel_sequence": false, 00:22:51.270 "rdma_max_cq_size": 0, 00:22:51.270 "rdma_cm_event_timeout_ms": 0, 00:22:51.270 "dhchap_digests": [ 00:22:51.270 "sha256", 00:22:51.270 "sha384", 00:22:51.270 "sha512" 00:22:51.270 ], 00:22:51.270 "dhchap_dhgroups": [ 00:22:51.270 "null", 00:22:51.270 "ffdhe2048", 
00:22:51.270 "ffdhe3072", 00:22:51.270 "ffdhe4096", 00:22:51.270 "ffdhe6144", 00:22:51.270 "ffdhe8192" 00:22:51.270 ] 00:22:51.270 } 00:22:51.270 }, 00:22:51.270 { 00:22:51.270 "method": "bdev_nvme_set_hotplug", 00:22:51.270 "params": { 00:22:51.270 "period_us": 100000, 00:22:51.270 "enable": false 00:22:51.270 } 00:22:51.270 }, 00:22:51.270 { 00:22:51.270 "method": "bdev_malloc_create", 00:22:51.270 "params": { 00:22:51.270 "name": "malloc0", 00:22:51.270 "num_blocks": 8192, 00:22:51.270 "block_size": 4096, 00:22:51.270 "physical_block_size": 4096, 00:22:51.270 "uuid": "c2b6efa1-88c6-40f1-9657-44653d02d0a3", 00:22:51.270 "optimal_io_boundary": 0 00:22:51.270 } 00:22:51.270 }, 00:22:51.270 { 00:22:51.270 "method": "bdev_wait_for_examine" 00:22:51.270 } 00:22:51.270 ] 00:22:51.271 }, 00:22:51.271 { 00:22:51.271 "subsystem": "nbd", 00:22:51.271 "config": [] 00:22:51.271 }, 00:22:51.271 { 00:22:51.271 "subsystem": "scheduler", 00:22:51.271 "config": [ 00:22:51.271 { 00:22:51.271 "method": "framework_set_scheduler", 00:22:51.271 "params": { 00:22:51.271 "name": "static" 00:22:51.271 } 00:22:51.271 } 00:22:51.271 ] 00:22:51.271 }, 00:22:51.271 { 00:22:51.271 "subsystem": "nvmf", 00:22:51.271 "config": [ 00:22:51.271 { 00:22:51.271 "method": "nvmf_set_config", 00:22:51.271 "params": { 00:22:51.271 "discovery_filter": "match_any", 00:22:51.271 "admin_cmd_passthru": { 00:22:51.271 "identify_ctrlr": false 00:22:51.271 } 00:22:51.271 } 00:22:51.271 }, 00:22:51.271 { 00:22:51.271 "method": "nvmf_set_max_subsystems", 00:22:51.271 "params": { 00:22:51.271 "max_subsystems": 1024 00:22:51.271 } 00:22:51.271 }, 00:22:51.271 { 00:22:51.271 "method": "nvmf_set_crdt", 00:22:51.271 "params": { 00:22:51.271 "crdt1": 0, 00:22:51.271 "crdt2": 0, 00:22:51.271 "crdt3": 0 00:22:51.271 } 00:22:51.271 }, 00:22:51.271 { 00:22:51.271 "method": "nvmf_create_transport", 00:22:51.271 "params": { 00:22:51.271 "trtype": "TCP", 00:22:51.271 "max_queue_depth": 128, 00:22:51.271 "max_io_qpairs_per_ctrlr": 127, 00:22:51.271 "in_capsule_data_size": 4096, 00:22:51.271 "max_io_size": 131072, 00:22:51.271 "io_unit_size": 131072, 00:22:51.271 "max_aq_depth": 128, 00:22:51.271 "num_shared_buffers": 511, 00:22:51.271 "buf_cache_size": 4294967295, 00:22:51.271 "dif_insert_or_strip": false, 00:22:51.271 "zcopy": false, 00:22:51.271 "c2h_success": false, 00:22:51.271 "sock_priority": 0, 00:22:51.271 "abort_timeout_sec": 1, 00:22:51.271 "ack_timeout": 0, 00:22:51.271 "data_wr_pool_size": 0 00:22:51.271 } 00:22:51.271 }, 00:22:51.271 { 00:22:51.271 "method": "nvmf_create_subsystem", 00:22:51.271 "params": { 00:22:51.271 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.271 "allow_any_host": false, 00:22:51.271 "serial_number": "00000000000000000000", 00:22:51.271 "model_number": "SPDK bdev Controller", 00:22:51.271 "max_namespaces": 32, 00:22:51.271 "min_cntlid": 1, 00:22:51.271 "max_cntlid": 65519, 00:22:51.271 "ana_reporting": false 00:22:51.271 } 00:22:51.271 }, 00:22:51.271 { 00:22:51.271 "method": "nvmf_subsystem_add_host", 00:22:51.271 "params": { 00:22:51.271 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.271 "host": "nqn.2016-06.io.spdk:host1", 00:22:51.271 "psk": "key0" 00:22:51.271 } 00:22:51.271 }, 00:22:51.271 { 00:22:51.271 "method": "nvmf_subsystem_add_ns", 00:22:51.271 "params": { 00:22:51.271 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.271 "namespace": { 00:22:51.271 "nsid": 1, 00:22:51.271 "bdev_name": "malloc0", 00:22:51.271 "nguid": "C2B6EFA188C640F1965744653D02D0A3", 00:22:51.271 "uuid": 
"c2b6efa1-88c6-40f1-9657-44653d02d0a3", 00:22:51.271 "no_auto_visible": false 00:22:51.271 } 00:22:51.271 } 00:22:51.271 }, 00:22:51.271 { 00:22:51.271 "method": "nvmf_subsystem_add_listener", 00:22:51.271 "params": { 00:22:51.271 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.271 "listen_address": { 00:22:51.271 "trtype": "TCP", 00:22:51.271 "adrfam": "IPv4", 00:22:51.271 "traddr": "10.0.0.2", 00:22:51.271 "trsvcid": "4420" 00:22:51.271 }, 00:22:51.271 "secure_channel": true 00:22:51.271 } 00:22:51.271 } 00:22:51.271 ] 00:22:51.271 } 00:22:51.271 ] 00:22:51.271 }' 00:22:51.271 01:51:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:22:51.271 01:51:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.271 01:51:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4096146 00:22:51.271 01:51:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:51.271 01:51:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4096146 00:22:51.271 01:51:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 4096146 ']' 00:22:51.271 01:51:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.271 01:51:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:51.271 01:51:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.271 01:51:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:51.271 01:51:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.271 [2024-05-15 01:51:15.114583] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:22:51.271 [2024-05-15 01:51:15.114666] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.271 EAL: No free 2048 kB hugepages reported on node 1 00:22:51.271 [2024-05-15 01:51:15.191868] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.528 [2024-05-15 01:51:15.273641] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.528 [2024-05-15 01:51:15.273705] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:51.528 [2024-05-15 01:51:15.273722] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.528 [2024-05-15 01:51:15.273735] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.528 [2024-05-15 01:51:15.273755] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:51.528 [2024-05-15 01:51:15.273844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:51.786 [2024-05-15 01:51:15.510846] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:51.786 [2024-05-15 01:51:15.542810] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:51.786 [2024-05-15 01:51:15.542887] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:51.786 [2024-05-15 01:51:15.553457] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:52.352 01:51:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:52.352 01:51:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:52.352 01:51:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:52.352 01:51:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:52.352 01:51:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:52.352 01:51:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:52.352 01:51:16 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=4096297 00:22:52.353 01:51:16 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 4096297 /var/tmp/bdevperf.sock 00:22:52.353 01:51:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 4096297 ']' 00:22:52.353 01:51:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:52.353 01:51:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:52.353 01:51:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:52.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
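The initiator side is replayed the same way: the fourth bdevperf (pid 4096297) is launched below with -c /dev/fd/63 carrying the saved bperfcfg, so the keyring entry, the bdev_nvme_attach_controller with "psk": "key0", and the bdev_enable_histogram setting are all restored from JSON rather than issued as individual RPCs.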
00:22:52.353 01:51:16 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:22:52.353 01:51:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:52.353 01:51:16 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:22:52.353 "subsystems": [ 00:22:52.353 { 00:22:52.353 "subsystem": "keyring", 00:22:52.353 "config": [ 00:22:52.353 { 00:22:52.353 "method": "keyring_file_add_key", 00:22:52.353 "params": { 00:22:52.353 "name": "key0", 00:22:52.353 "path": "/tmp/tmp.ZinUbPIZll" 00:22:52.353 } 00:22:52.353 } 00:22:52.353 ] 00:22:52.353 }, 00:22:52.353 { 00:22:52.353 "subsystem": "iobuf", 00:22:52.353 "config": [ 00:22:52.353 { 00:22:52.353 "method": "iobuf_set_options", 00:22:52.353 "params": { 00:22:52.353 "small_pool_count": 8192, 00:22:52.353 "large_pool_count": 1024, 00:22:52.353 "small_bufsize": 8192, 00:22:52.353 "large_bufsize": 135168 00:22:52.353 } 00:22:52.353 } 00:22:52.353 ] 00:22:52.353 }, 00:22:52.353 { 00:22:52.353 "subsystem": "sock", 00:22:52.353 "config": [ 00:22:52.353 { 00:22:52.353 "method": "sock_impl_set_options", 00:22:52.353 "params": { 00:22:52.353 "impl_name": "posix", 00:22:52.353 "recv_buf_size": 2097152, 00:22:52.353 "send_buf_size": 2097152, 00:22:52.353 "enable_recv_pipe": true, 00:22:52.353 "enable_quickack": false, 00:22:52.353 "enable_placement_id": 0, 00:22:52.353 "enable_zerocopy_send_server": true, 00:22:52.353 "enable_zerocopy_send_client": false, 00:22:52.353 "zerocopy_threshold": 0, 00:22:52.353 "tls_version": 0, 00:22:52.353 "enable_ktls": false 00:22:52.353 } 00:22:52.353 }, 00:22:52.353 { 00:22:52.353 "method": "sock_impl_set_options", 00:22:52.353 "params": { 00:22:52.353 "impl_name": "ssl", 00:22:52.353 "recv_buf_size": 4096, 00:22:52.353 "send_buf_size": 4096, 00:22:52.353 "enable_recv_pipe": true, 00:22:52.353 "enable_quickack": false, 00:22:52.353 "enable_placement_id": 0, 00:22:52.353 "enable_zerocopy_send_server": true, 00:22:52.353 "enable_zerocopy_send_client": false, 00:22:52.353 "zerocopy_threshold": 0, 00:22:52.353 "tls_version": 0, 00:22:52.353 "enable_ktls": false 00:22:52.353 } 00:22:52.353 } 00:22:52.353 ] 00:22:52.353 }, 00:22:52.353 { 00:22:52.353 "subsystem": "vmd", 00:22:52.353 "config": [] 00:22:52.353 }, 00:22:52.353 { 00:22:52.353 "subsystem": "accel", 00:22:52.353 "config": [ 00:22:52.353 { 00:22:52.353 "method": "accel_set_options", 00:22:52.353 "params": { 00:22:52.353 "small_cache_size": 128, 00:22:52.353 "large_cache_size": 16, 00:22:52.353 "task_count": 2048, 00:22:52.353 "sequence_count": 2048, 00:22:52.353 "buf_count": 2048 00:22:52.353 } 00:22:52.353 } 00:22:52.353 ] 00:22:52.353 }, 00:22:52.353 { 00:22:52.353 "subsystem": "bdev", 00:22:52.353 "config": [ 00:22:52.353 { 00:22:52.353 "method": "bdev_set_options", 00:22:52.353 "params": { 00:22:52.353 "bdev_io_pool_size": 65535, 00:22:52.353 "bdev_io_cache_size": 256, 00:22:52.353 "bdev_auto_examine": true, 00:22:52.353 "iobuf_small_cache_size": 128, 00:22:52.353 "iobuf_large_cache_size": 16 00:22:52.353 } 00:22:52.353 }, 00:22:52.353 { 00:22:52.353 "method": "bdev_raid_set_options", 00:22:52.353 "params": { 00:22:52.353 "process_window_size_kb": 1024 00:22:52.353 } 00:22:52.353 }, 00:22:52.353 { 00:22:52.353 "method": "bdev_iscsi_set_options", 00:22:52.353 "params": { 00:22:52.353 "timeout_sec": 30 00:22:52.353 } 00:22:52.353 }, 00:22:52.353 { 00:22:52.353 "method": "bdev_nvme_set_options", 00:22:52.353 
"params": { 00:22:52.353 "action_on_timeout": "none", 00:22:52.353 "timeout_us": 0, 00:22:52.353 "timeout_admin_us": 0, 00:22:52.353 "keep_alive_timeout_ms": 10000, 00:22:52.353 "arbitration_burst": 0, 00:22:52.353 "low_priority_weight": 0, 00:22:52.353 "medium_priority_weight": 0, 00:22:52.353 "high_priority_weight": 0, 00:22:52.353 "nvme_adminq_poll_period_us": 10000, 00:22:52.353 "nvme_ioq_poll_period_us": 0, 00:22:52.353 "io_queue_requests": 512, 00:22:52.353 "delay_cmd_submit": true, 00:22:52.353 "transport_retry_count": 4, 00:22:52.353 "bdev_retry_count": 3, 00:22:52.353 "transport_ack_timeout": 0, 00:22:52.353 "ctrlr_loss_timeout_sec": 0, 00:22:52.353 "reconnect_delay_sec": 0, 00:22:52.353 "fast_io_fail_timeout_sec": 0, 00:22:52.353 "disable_auto_failback": false, 00:22:52.353 "generate_uuids": false, 00:22:52.353 "transport_tos": 0, 00:22:52.353 "nvme_error_stat": false, 00:22:52.353 "rdma_srq_size": 0, 00:22:52.354 "io_path_stat": false, 00:22:52.354 "allow_accel_sequence": false, 00:22:52.354 "rdma_max_cq_size": 0, 00:22:52.354 "rdma_cm_event_timeout_ms": 0, 00:22:52.354 "dhchap_digests": [ 00:22:52.354 "sha256", 00:22:52.354 "sha384", 00:22:52.354 "sha512" 00:22:52.354 ], 00:22:52.354 "dhchap_dhgroups": [ 00:22:52.354 "null", 00:22:52.354 "ffdhe2048", 00:22:52.354 "ffdhe3072", 00:22:52.354 "ffdhe4096", 00:22:52.354 "ffdhe6144", 00:22:52.354 "ffdhe8192" 00:22:52.354 ] 00:22:52.354 } 00:22:52.354 }, 00:22:52.354 { 00:22:52.354 "method": "bdev_nvme_attach_controller", 00:22:52.354 "params": { 00:22:52.354 "name": "nvme0", 00:22:52.354 "trtype": "TCP", 00:22:52.354 "adrfam": "IPv4", 00:22:52.354 "traddr": "10.0.0.2", 00:22:52.354 "trsvcid": "4420", 00:22:52.354 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:52.354 "prchk_reftag": false, 00:22:52.354 "prchk_guard": false, 00:22:52.354 "ctrlr_loss_timeout_sec": 0, 00:22:52.354 "reconnect_delay_sec": 0, 00:22:52.354 "fast_io_fail_timeout_sec": 0, 00:22:52.354 "psk": "key0", 00:22:52.354 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:52.354 "hdgst": false, 00:22:52.354 "ddgst": false 00:22:52.354 } 00:22:52.354 }, 00:22:52.354 { 00:22:52.354 "method": "bdev_nvme_set_hotplug", 00:22:52.354 "params": { 00:22:52.354 "period_us": 100000, 00:22:52.354 "enable": false 00:22:52.354 } 00:22:52.354 }, 00:22:52.354 { 00:22:52.354 "method": "bdev_enable_histogram", 00:22:52.354 "params": { 00:22:52.354 "name": "nvme0n1", 00:22:52.354 "enable": true 00:22:52.354 } 00:22:52.354 }, 00:22:52.354 { 00:22:52.354 "method": "bdev_wait_for_examine" 00:22:52.354 } 00:22:52.354 ] 00:22:52.354 }, 00:22:52.354 { 00:22:52.354 "subsystem": "nbd", 00:22:52.354 "config": [] 00:22:52.354 } 00:22:52.354 ] 00:22:52.354 }' 00:22:52.354 01:51:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:52.354 [2024-05-15 01:51:16.143365] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
00:22:52.354 [2024-05-15 01:51:16.143442] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4096297 ] 00:22:52.354 EAL: No free 2048 kB hugepages reported on node 1 00:22:52.354 [2024-05-15 01:51:16.209869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.612 [2024-05-15 01:51:16.292419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:52.612 [2024-05-15 01:51:16.456013] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:53.543 01:51:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:53.543 01:51:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:53.544 01:51:17 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:53.544 01:51:17 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:22:53.544 01:51:17 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.544 01:51:17 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:53.801 Running I/O for 1 seconds... 00:22:54.736 00:22:54.736 Latency(us) 00:22:54.736 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.736 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:54.736 Verification LBA range: start 0x0 length 0x2000 00:22:54.736 nvme0n1 : 1.03 3217.88 12.57 0.00 0.00 39204.61 6990.51 48156.82 00:22:54.736 =================================================================================================================== 00:22:54.736 Total : 3217.88 12.57 0.00 0.00 39204.61 6990.51 48156.82 00:22:54.736 0 00:22:54.736 01:51:18 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:22:54.736 01:51:18 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:22:54.736 01:51:18 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:22:54.736 01:51:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # type=--id 00:22:54.736 01:51:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # id=0 00:22:54.736 01:51:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # '[' --id = --pid ']' 00:22:54.736 01:51:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@811 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:54.736 01:51:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@811 -- # shm_files=nvmf_trace.0 00:22:54.736 01:51:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # [[ -z nvmf_trace.0 ]] 00:22:54.736 01:51:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # for n in $shm_files 00:22:54.736 01:51:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:54.736 nvmf_trace.0 00:22:54.736 01:51:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@820 -- # return 0 00:22:54.736 01:51:18 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 4096297 00:22:54.736 01:51:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 4096297 ']' 00:22:54.736 01:51:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 4096297 
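Note on the trace archive step above: process_shm finds /dev/shm/nvmf_trace.0 (the file the app_setup_trace notices pointed at during startup) and tars it into the output directory so the run can still be examined after the target exits. A sketch of the offline decode, assuming the spdk_trace tool in this tree accepts a copied trace file via -f:

    # Sketch: inspect an archived SPDK trace after the fact.
    # -f is an assumed option for reading a trace file offline rather than
    # attaching to a live application's shared memory by -i/-p.
    tar -xzf nvmf_trace.0_shm.tar.gz
    build/bin/spdk_trace -f ./nvmf_trace.0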
00:22:54.736 01:51:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:54.736 01:51:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:54.737 01:51:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4096297 00:22:54.737 01:51:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:22:54.737 01:51:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:22:54.737 01:51:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4096297' 00:22:54.737 killing process with pid 4096297 00:22:54.737 01:51:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 4096297 00:22:54.737 Received shutdown signal, test time was about 1.000000 seconds 00:22:54.737 00:22:54.737 Latency(us) 00:22:54.737 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.737 =================================================================================================================== 00:22:54.737 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:54.737 01:51:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 4096297 00:22:54.995 01:51:18 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:22:54.995 01:51:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:54.995 01:51:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:22:54.995 01:51:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:54.995 01:51:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:22:54.995 01:51:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:54.995 01:51:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:54.995 rmmod nvme_tcp 00:22:54.995 rmmod nvme_fabrics 00:22:55.253 rmmod nvme_keyring 00:22:55.253 01:51:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:55.253 01:51:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:22:55.253 01:51:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:22:55.253 01:51:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 4096146 ']' 00:22:55.253 01:51:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 4096146 00:22:55.253 01:51:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 4096146 ']' 00:22:55.253 01:51:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 4096146 00:22:55.253 01:51:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:55.253 01:51:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:55.253 01:51:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4096146 00:22:55.253 01:51:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:22:55.253 01:51:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:22:55.253 01:51:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4096146' 00:22:55.253 killing process with pid 4096146 00:22:55.253 01:51:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 4096146 00:22:55.253 [2024-05-15 01:51:18.975413] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:55.253 01:51:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- 
# wait 4096146 00:22:55.512 01:51:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:55.512 01:51:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:55.512 01:51:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:55.512 01:51:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:55.512 01:51:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:55.512 01:51:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:55.512 01:51:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:55.512 01:51:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:57.421 01:51:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:57.421 01:51:21 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.v6MwYIc7PY /tmp/tmp.4qRAmPpLqd /tmp/tmp.ZinUbPIZll 00:22:57.421 00:22:57.421 real 1m19.300s 00:22:57.421 user 2m2.859s 00:22:57.421 sys 0m27.230s 00:22:57.421 01:51:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:57.421 01:51:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:57.421 ************************************ 00:22:57.421 END TEST nvmf_tls 00:22:57.421 ************************************ 00:22:57.421 01:51:21 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:57.421 01:51:21 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:22:57.421 01:51:21 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:57.421 01:51:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:57.421 ************************************ 00:22:57.421 START TEST nvmf_fips 00:22:57.421 ************************************ 00:22:57.421 01:51:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:57.681 * Looking for test storage... 
00:22:57.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.681 01:51:21 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:57.681 01:51:21 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@649 -- # local es=0 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@637 -- # local arg=openssl 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # type -t openssl 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # type -P openssl 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # arg=/usr/bin/openssl 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # [[ -x /usr/bin/openssl ]] 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # openssl md5 /dev/fd/62 00:22:57.682 Error setting digest 00:22:57.682 00C23F236A7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:22:57.682 00C23F236A7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # es=1 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:22:57.682 01:51:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:00.210 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:00.210 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:23:00.210 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:00.210 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:00.210 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:00.210 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:00.211 
01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:00.211 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:00.211 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:00.211 Found net devices under 0000:09:00.0: cvl_0_0 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:00.211 Found net devices under 0000:09:00.1: cvl_0_1 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:00.211 01:51:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:00.211 01:51:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:00.211 01:51:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:00.211 01:51:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:00.211 01:51:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:00.211 01:51:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:00.211 01:51:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:00.211 01:51:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:00.211 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:00.211 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:23:00.211 00:23:00.211 --- 10.0.0.2 ping statistics --- 00:23:00.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.211 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:23:00.211 01:51:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:00.211 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:00.211 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:23:00.211 00:23:00.211 --- 10.0.0.1 ping statistics --- 00:23:00.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.211 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:23:00.211 01:51:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:00.211 01:51:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:23:00.211 01:51:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:00.211 01:51:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:00.211 01:51:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:00.211 01:51:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:00.211 01:51:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:00.211 01:51:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:00.211 01:51:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:00.211 01:51:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:23:00.211 01:51:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:00.211 01:51:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:00.211 01:51:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:00.211 01:51:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=4098943 00:23:00.211 01:51:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:00.211 01:51:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 4098943 00:23:00.211 01:51:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@828 -- # '[' -z 4098943 ']' 00:23:00.211 01:51:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.211 01:51:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:00.211 01:51:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:00.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:00.211 01:51:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:00.211 01:51:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:00.470 [2024-05-15 01:51:24.189699] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:23:00.470 [2024-05-15 01:51:24.189801] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:00.470 EAL: No free 2048 kB hugepages reported on node 1 00:23:00.470 [2024-05-15 01:51:24.266879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.470 [2024-05-15 01:51:24.350523] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:00.470 [2024-05-15 01:51:24.350587] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:00.470 [2024-05-15 01:51:24.350615] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:00.470 [2024-05-15 01:51:24.350627] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:00.470 [2024-05-15 01:51:24.350637] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:00.470 [2024-05-15 01:51:24.350665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:01.403 01:51:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:01.403 01:51:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@861 -- # return 0 00:23:01.403 01:51:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:01.403 01:51:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:01.403 01:51:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:01.403 01:51:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:01.403 01:51:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:23:01.403 01:51:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:01.403 01:51:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:01.403 01:51:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:01.403 01:51:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:01.403 01:51:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:01.403 01:51:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:01.403 01:51:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:01.661 [2024-05-15 01:51:25.419397] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:01.661 [2024-05-15 01:51:25.435351] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:01.661 [2024-05-15 01:51:25.435430] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:01.661 [2024-05-15 01:51:25.435665] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:01.661 [2024-05-15 01:51:25.466993] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:01.661 malloc0 00:23:01.661 01:51:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:01.661 01:51:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=4099100 00:23:01.661 01:51:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:01.661 01:51:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 4099100 /var/tmp/bdevperf.sock 00:23:01.661 01:51:25 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@828 -- # '[' -z 4099100 ']' 00:23:01.661 01:51:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:01.661 01:51:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:01.661 01:51:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:01.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:01.661 01:51:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:01.661 01:51:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:01.661 [2024-05-15 01:51:25.556717] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:23:01.661 [2024-05-15 01:51:25.556794] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4099100 ] 00:23:01.661 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.919 [2024-05-15 01:51:25.624102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.920 [2024-05-15 01:51:25.705806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:01.920 01:51:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:01.920 01:51:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@861 -- # return 0 00:23:01.920 01:51:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:02.177 [2024-05-15 01:51:26.093098] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:02.177 [2024-05-15 01:51:26.093253] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:02.435 TLSTESTn1 00:23:02.435 01:51:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:02.435 Running I/O for 10 seconds... 
00:23:12.429 00:23:12.429 Latency(us) 00:23:12.429 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:12.429 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:12.429 Verification LBA range: start 0x0 length 0x2000 00:23:12.430 TLSTESTn1 : 10.02 3451.79 13.48 0.00 0.00 37019.07 6941.96 51263.72 00:23:12.430 =================================================================================================================== 00:23:12.430 Total : 3451.79 13.48 0.00 0.00 37019.07 6941.96 51263.72 00:23:12.701 0 00:23:12.701 01:51:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:12.701 01:51:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:12.701 01:51:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # type=--id 00:23:12.701 01:51:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # id=0 00:23:12.701 01:51:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # '[' --id = --pid ']' 00:23:12.701 01:51:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@811 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:12.701 01:51:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@811 -- # shm_files=nvmf_trace.0 00:23:12.701 01:51:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # [[ -z nvmf_trace.0 ]] 00:23:12.701 01:51:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # for n in $shm_files 00:23:12.701 01:51:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:12.701 nvmf_trace.0 00:23:12.701 01:51:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@820 -- # return 0 00:23:12.701 01:51:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 4099100 00:23:12.701 01:51:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@947 -- # '[' -z 4099100 ']' 00:23:12.701 01:51:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # kill -0 4099100 00:23:12.701 01:51:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # uname 00:23:12.701 01:51:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:12.701 01:51:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4099100 00:23:12.701 01:51:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:23:12.701 01:51:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:23:12.701 01:51:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4099100' 00:23:12.701 killing process with pid 4099100 00:23:12.701 01:51:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # kill 4099100 00:23:12.701 Received shutdown signal, test time was about 10.000000 seconds 00:23:12.701 00:23:12.701 Latency(us) 00:23:12.701 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:12.701 =================================================================================================================== 00:23:12.701 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:12.701 [2024-05-15 01:51:36.441440] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:12.701 01:51:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@971 -- # wait 4099100 00:23:12.959 01:51:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:12.959 01:51:36 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:23:12.959 01:51:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:23:12.959 01:51:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:12.959 01:51:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:23:12.959 01:51:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:12.959 01:51:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:12.959 rmmod nvme_tcp 00:23:12.959 rmmod nvme_fabrics 00:23:12.959 rmmod nvme_keyring 00:23:12.959 01:51:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:12.959 01:51:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:23:12.959 01:51:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:23:12.959 01:51:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 4098943 ']' 00:23:12.959 01:51:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 4098943 00:23:12.959 01:51:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@947 -- # '[' -z 4098943 ']' 00:23:12.959 01:51:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # kill -0 4098943 00:23:12.959 01:51:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # uname 00:23:12.959 01:51:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:12.959 01:51:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4098943 00:23:12.959 01:51:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:23:12.959 01:51:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:23:12.959 01:51:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4098943' 00:23:12.959 killing process with pid 4098943 00:23:12.959 01:51:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # kill 4098943 00:23:12.959 [2024-05-15 01:51:36.719955] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:12.959 [2024-05-15 01:51:36.720002] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:12.959 01:51:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@971 -- # wait 4098943 00:23:13.218 01:51:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:13.218 01:51:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:13.218 01:51:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:13.218 01:51:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:13.218 01:51:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:13.218 01:51:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.218 01:51:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:13.218 01:51:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.121 01:51:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:15.121 01:51:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:15.121 00:23:15.121 real 0m17.654s 00:23:15.121 user 0m22.183s 00:23:15.121 sys 0m5.821s 00:23:15.121 01:51:38 
nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:15.121 01:51:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:15.121 ************************************ 00:23:15.121 END TEST nvmf_fips 00:23:15.121 ************************************ 00:23:15.121 01:51:39 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:23:15.121 01:51:39 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:15.121 01:51:39 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:23:15.121 01:51:39 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:15.121 01:51:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:15.121 ************************************ 00:23:15.121 START TEST nvmf_fuzz 00:23:15.121 ************************************ 00:23:15.121 01:51:39 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:15.381 * Looking for test storage... 00:23:15.381 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz 
-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:23:15.381 01:51:39 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:17.913 01:51:41 
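gather_supported_nvmf_pci_devs above builds the e810/x722/mlx arrays by indexing a cached PCI bus map with vendor:device keys (0x8086:0x159b is what matches on this node), then takes the test NICs from whichever family it found. A reduced sketch of the same classification using lspci in place of the script's cached bus scan (lspci here is a stand-in, not what common.sh itself uses):

    #!/usr/bin/env bash
    # Collect PCI addresses of Intel E810 ports (device ids 0x1592 / 0x159b),
    # mirroring the e810=() array that nvmf/common.sh populates.
    intel=8086
    e810=()
    while read -r addr _; do
        e810+=("$addr")
    done < <(lspci -Dn -d "${intel}:1592"; lspci -Dn -d "${intel}:159b")
    printf 'found %d e810 port(s): %s\n' "${#e810[@]}" "${e810[*]}"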
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:17.913 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:17.913 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:17.913 Found net devices under 0000:09:00.0: cvl_0_0 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:17.913 01:51:41 
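Each "Found net devices under ..." line above comes from globbing the device's sysfs net directory, which is how a PCI address is mapped to its kernel interface name. The lookup in isolation (values as seen in this run):

    pci=0000:09:00.0
    pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )   # e.g. .../net/cvl_0_0
    pci_net_devs=( "${pci_net_devs[@]##*/}" )            # strip path, keep names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"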
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:17.913 Found net devices under 0000:09:00.1: cvl_0_1 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:17.913 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:17.914 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:17.914 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:17.914 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:17.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:17.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:23:17.914 00:23:17.914 --- 10.0.0.2 ping statistics --- 00:23:17.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.914 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:23:17.914 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:17.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
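The pings around this point verify the loopback topology that nvmf_tcp_init built above: one port of the NIC stays in the root namespace as the initiator (cvl_0_1, 10.0.0.1) and the sibling port is moved into a private namespace as the target (cvl_0_0, 10.0.0.2), so traffic between the two addresses crosses the link between the ports rather than the loopback device. Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator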
00:23:17.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:23:17.914 00:23:17.914 --- 10.0.0.1 ping statistics --- 00:23:17.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.914 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:23:17.914 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:17.914 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:23:17.914 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:17.914 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:17.914 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:17.914 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:17.914 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:17.914 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:17.914 01:51:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:17.914 01:51:41 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=4102644 00:23:17.914 01:51:41 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:17.914 01:51:41 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:17.914 01:51:41 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 4102644 00:23:17.914 01:51:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@828 -- # '[' -z 4102644 ']' 00:23:17.914 01:51:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.914 01:51:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:17.914 01:51:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
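waitforlisten above blocks until the just-launched nvmf_tgt is both still alive and answering on its RPC socket. A minimal re-implementation of that idea, assuming rpc.py's rpc_get_methods call is an acceptable liveness probe (the retry count and sleep interval are illustrative):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        for _ in $(seq 1 100); do
            kill -0 "$pid" 2>/dev/null || return 1    # target died during startup
            # rpc_get_methods succeeds once the app is serving the socket
            if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }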
00:23:17.914 01:51:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:17.914 01:51:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:18.172 01:51:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:18.172 01:51:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@861 -- # return 0 00:23:18.172 01:51:41 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:18.172 01:51:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:18.172 01:51:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:18.172 01:51:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:18.172 01:51:41 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:23:18.172 01:51:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:18.172 01:51:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:18.172 Malloc0 00:23:18.172 01:51:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:18.172 01:51:42 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:18.172 01:51:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:18.172 01:51:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:18.173 01:51:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:18.173 01:51:42 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:18.173 01:51:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:18.173 01:51:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:18.173 01:51:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:18.173 01:51:42 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:18.173 01:51:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:18.173 01:51:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:18.173 01:51:42 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:18.173 01:51:42 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:23:18.173 01:51:42 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:23:50.235 Fuzzing completed. 
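The 30-second run that just completed fuzzed a target built with exactly five RPCs, traced above: one TCP transport, one 64 MiB malloc bdev, one subsystem, its namespace, and a listener. The same sequence issued directly with rpc.py against the default /var/tmp/spdk.sock (rpc.py stands in for the test's rpc_cmd wrapper, which runs inside the target's namespace):

    rpc=scripts/rpc.py                                  # from the SPDK repo root
    $rpc nvmf_create_transport -t tcp -o -u 8192        # -u: in-capsule data size
    $rpc bdev_malloc_create -b Malloc0 64 512           # 64 MiB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

nvme_fuzz then connects as a host to that trid with -t 30 (run duration in seconds) and -S 123456 (a fixed seed, so a failing run can be replayed).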
Shutting down the fuzz application 00:23:50.235 00:23:50.235 Dumping successful admin opcodes: 00:23:50.235 8, 9, 10, 24, 00:23:50.235 Dumping successful io opcodes: 00:23:50.235 0, 9, 00:23:50.236 NS: 0x200003aeff00 I/O qp, Total commands completed: 432108, total successful commands: 2524, random_seed: 3791447360 00:23:50.236 NS: 0x200003aeff00 admin qp, Total commands completed: 54128, total successful commands: 435, random_seed: 2987736768 00:23:50.236 01:52:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:23:50.236 Fuzzing completed. Shutting down the fuzz application 00:23:50.236 00:23:50.236 Dumping successful admin opcodes: 00:23:50.236 24, 00:23:50.236 Dumping successful io opcodes: 00:23:50.236 00:23:50.236 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 3797365793 00:23:50.236 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 3797497021 00:23:50.236 01:52:13 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:50.236 01:52:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:50.236 01:52:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:50.236 01:52:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:50.236 01:52:13 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:23:50.236 01:52:13 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:23:50.236 01:52:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:50.236 01:52:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:23:50.236 01:52:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:50.236 01:52:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:23:50.236 01:52:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:50.236 01:52:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:50.236 rmmod nvme_tcp 00:23:50.236 rmmod nvme_fabrics 00:23:50.236 rmmod nvme_keyring 00:23:50.236 01:52:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:50.236 01:52:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:23:50.236 01:52:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:23:50.236 01:52:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 4102644 ']' 00:23:50.236 01:52:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 4102644 00:23:50.236 01:52:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@947 -- # '[' -z 4102644 ']' 00:23:50.236 01:52:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # kill -0 4102644 00:23:50.236 01:52:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # uname 00:23:50.236 01:52:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:50.236 01:52:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4102644 00:23:50.236 01:52:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:23:50.236 01:52:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 
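Reading the two result summaries above against the NVMe spec (an interpretation; the tool prints only numbers, which appear to be decimal opcodes): in the random run the admin commands that ever succeeded were 8 (Abort), 9 (Set Features), 10 (Get Features) and 24 (Keep Alive), and the only successful I/O opcodes were 0 (Flush) and 9 (Dataset Management), i.e. commands that tolerate mostly-garbage fields. The second run (-j example.json) appears to replay the fixed commands defined in that file rather than random ones, hence the tiny totals. A lookup sketch for decoding the dumps:

    declare -A admin_opc=( [8]="Abort" [9]="Set Features" [10]="Get Features" [24]="Keep Alive" )
    declare -A io_opc=( [0]="Flush" [9]="Dataset Management" )
    for op in 8 9 10 24; do echo "admin opcode $op = ${admin_opc[$op]}"; done
    for op in 0 9;      do echo "io opcode    $op = ${io_opc[$op]}"; done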
00:23:50.236 01:52:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4102644' 00:23:50.236 killing process with pid 4102644 00:23:50.236 01:52:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@966 -- # kill 4102644 00:23:50.236 01:52:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@971 -- # wait 4102644 00:23:50.494 01:52:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:50.494 01:52:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:50.494 01:52:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:50.494 01:52:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:50.494 01:52:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:50.494 01:52:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.494 01:52:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:50.494 01:52:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:52.397 01:52:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:52.397 01:52:16 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:23:52.397 00:23:52.397 real 0m37.219s 00:23:52.397 user 0m49.477s 00:23:52.397 sys 0m15.597s 00:23:52.397 01:52:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:52.397 01:52:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:52.397 ************************************ 00:23:52.397 END TEST nvmf_fuzz 00:23:52.397 ************************************ 00:23:52.397 01:52:16 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:23:52.397 01:52:16 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:23:52.397 01:52:16 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:52.397 01:52:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:52.397 ************************************ 00:23:52.397 START TEST nvmf_multiconnection 00:23:52.397 ************************************ 00:23:52.397 01:52:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:23:52.656 * Looking for test storage... 
00:23:52.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:23:52.656 01:52:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:55.184 01:52:18 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:55.184 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:55.184 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:55.184 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:55.185 Found net devices under 0000:09:00.0: cvl_0_0 00:23:55.185 01:52:18 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:55.185 Found net devices under 0000:09:00.1: cvl_0_1 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:55.185 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:55.185 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:23:55.185 00:23:55.185 --- 10.0.0.2 ping statistics --- 00:23:55.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.185 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:55.185 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:55.185 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:23:55.185 00:23:55.185 --- 10.0.0.1 ping statistics --- 00:23:55.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.185 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=4108673 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 4108673 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@828 -- # '[' -z 4108673 ']' 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:55.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
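Note the core mask change: the fuzz target earlier started with -m 0x1 (one reactor, core 0), while this multiconnection target uses -m 0xF, so the EAL startup that follows reports four reactors (cores 0-3), presumably to spread the eleven subsystems' connections. The mask is a plain bitmap of CPU cores:

    # bit i set in the mask => a reactor runs on core i
    # 0x1 = 0b0001 -> core 0        0xF = 0b1111 -> cores 0,1,2,3
    mask=0xF
    printf 'reactor cores:'
    for i in $(seq 0 31); do (( (mask >> i) & 1 )) && printf ' %d' "$i"; done
    echo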
00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:55.185 01:52:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.185 [2024-05-15 01:52:18.971652] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:23:55.185 [2024-05-15 01:52:18.971745] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:55.185 EAL: No free 2048 kB hugepages reported on node 1 00:23:55.185 [2024-05-15 01:52:19.050595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:55.443 [2024-05-15 01:52:19.139288] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:55.443 [2024-05-15 01:52:19.139345] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:55.443 [2024-05-15 01:52:19.139370] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:55.443 [2024-05-15 01:52:19.139383] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:55.443 [2024-05-15 01:52:19.139395] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:55.443 [2024-05-15 01:52:19.139480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:55.443 [2024-05-15 01:52:19.139535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:55.443 [2024-05-15 01:52:19.139884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:55.443 [2024-05-15 01:52:19.139888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:55.443 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:55.443 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@861 -- # return 0 00:23:55.443 01:52:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:55.443 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:55.443 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.443 01:52:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:55.443 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:55.443 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.443 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.443 [2024-05-15 01:52:19.302919] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:55.443 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.443 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:23:55.443 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:55.443 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:55.443 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.443 01:52:19 
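The trace around this point is eleven iterations of the same four RPCs, one per subsystem (NVMF_SUBSYS=11): create a 64 MiB malloc bdev, create cnodeN, attach the bdev as its namespace, and listen on 10.0.0.2:4420. The loop behind those traces, reduced to its essentials (rpc.py again standing in for the test's rpc_cmd wrapper):

    rpc=scripts/rpc.py
    NVMF_SUBSYS=11
    for i in $(seq 1 "$NVMF_SUBSYS"); do
        $rpc bdev_malloc_create 64 512 -b "Malloc$i"
        $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done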
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.443 Malloc1 00:23:55.443 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.443 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:23:55.443 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.443 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.443 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.443 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:55.443 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.443 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.443 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.443 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:55.443 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.443 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.443 [2024-05-15 01:52:19.359114] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:55.443 [2024-05-15 01:52:19.359460] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:55.443 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.443 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:55.443 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:23:55.443 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.443 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.702 Malloc2 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.702 Malloc3 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.702 Malloc4 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode4 Malloc4 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.702 Malloc5 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.702 Malloc6 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:23:55.702 
01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:23:55.702 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.703 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.703 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.703 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:55.703 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:23:55.703 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.703 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.703 Malloc7 00:23:55.703 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.703 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:23:55.703 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.703 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.703 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.703 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:23:55.703 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.703 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.961 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.961 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:23:55.961 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.961 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.961 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.961 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:55.961 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:23:55.961 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 
-- # xtrace_disable 00:23:55.961 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.961 Malloc8 00:23:55.961 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.961 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.962 Malloc9 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.962 01:52:19 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.962 Malloc10 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.962 Malloc11 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:55.962 01:52:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:56.528 01:52:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:23:56.528 01:52:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:23:56.528 01:52:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:23:56.528 01:52:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:23:56.528 01:52:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:23:59.056 01:52:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:23:59.056 01:52:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:23:59.056 01:52:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK1 00:23:59.056 01:52:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:23:59.056 01:52:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:23:59.056 01:52:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:23:59.056 01:52:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:59.056 01:52:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:23:59.314 01:52:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:23:59.314 01:52:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:23:59.314 01:52:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:23:59.314 01:52:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:23:59.314 01:52:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:24:01.243 01:52:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:24:01.243 01:52:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:24:01.243 01:52:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK2 00:24:01.243 01:52:25 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:24:01.243 01:52:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:24:01.243 01:52:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:24:01.243 01:52:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:01.243 01:52:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:24:01.809 01:52:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:24:01.809 01:52:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:24:01.809 01:52:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:24:01.809 01:52:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:24:01.809 01:52:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:24:04.336 01:52:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:24:04.336 01:52:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:24:04.336 01:52:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK3 00:24:04.336 01:52:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:24:04.336 01:52:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:24:04.336 01:52:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:24:04.336 01:52:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:04.336 01:52:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:24:04.594 01:52:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:24:04.594 01:52:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:24:04.594 01:52:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:24:04.594 01:52:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:24:04.594 01:52:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:24:07.118 01:52:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:24:07.118 01:52:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:24:07.118 01:52:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK4 00:24:07.118 01:52:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:24:07.118 01:52:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:24:07.118 01:52:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 
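The same connect-and-verify cycle continues below for cnode5 through cnode11. Condensed into plain shell, the pattern this trace is executing looks roughly like the sketch that follows. It is a minimal reconstruction, not the verbatim test script: $rpc stands in for SPDK's scripts/rpc.py, and the polling loop is a simplified form of the waitforserial helper whose xtrace output appears throughout this section.

    NVMF_SUBSYS=11
    HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a"
    HOSTID="29f67375-a902-e411-ace9-001e67bc3c9a"

    # Target side: TCP transport, then one malloc-backed subsystem per index.
    $rpc nvmf_create_transport -t tcp -o -u 8192    # transport options as passed in the trace
    for i in $(seq 1 "$NVMF_SUBSYS"); do
        $rpc bdev_malloc_create 64 512 -b "Malloc$i"    # 64 MB bdev, 512-byte blocks
        $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done

    # Host side: connect to each subsystem, then poll until the namespace
    # shows up as a local block device (waitforserial retries about 15 times,
    # sleeping 2 s between attempts, as the trace shows).
    for i in $(seq 1 "$NVMF_SUBSYS"); do
        nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
            -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
        until lsblk -l -o NAME,SERIAL | grep -q "SPDK$i"; do
            sleep 2
        done
    done

The two halves tie together through the serial number: -a on nvmf_create_subsystem allows any host to connect, and -s "SPDK$i" sets the controller serial, which is exactly the string waitforserial greps out of the lsblk SERIAL column to confirm that the namespace for subsystem $i has arrived on the host.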
00:24:07.118 01:52:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:07.118 01:52:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:24:07.376 01:52:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:24:07.376 01:52:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:24:07.376 01:52:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:24:07.376 01:52:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:24:07.376 01:52:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:24:09.298 01:52:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:24:09.298 01:52:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:24:09.298 01:52:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK5 00:24:09.298 01:52:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:24:09.298 01:52:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:24:09.298 01:52:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:24:09.298 01:52:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:09.298 01:52:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:24:09.861 01:52:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:24:09.861 01:52:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:24:09.861 01:52:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:24:09.861 01:52:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:24:09.861 01:52:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:24:11.760 01:52:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:24:11.760 01:52:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:24:11.760 01:52:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK6 00:24:11.760 01:52:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:24:11.760 01:52:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:24:11.760 01:52:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:24:11.760 01:52:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:11.760 01:52:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:24:12.692 01:52:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:24:12.692 01:52:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:24:12.692 01:52:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:24:12.692 01:52:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:24:12.692 01:52:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:24:14.587 01:52:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:24:14.587 01:52:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:24:14.587 01:52:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK7 00:24:14.587 01:52:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:24:14.587 01:52:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:24:14.587 01:52:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:24:14.587 01:52:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:14.587 01:52:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:24:15.519 01:52:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:24:15.519 01:52:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:24:15.519 01:52:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:24:15.519 01:52:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:24:15.519 01:52:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:24:17.417 01:52:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:24:17.417 01:52:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:24:17.417 01:52:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK8 00:24:17.417 01:52:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:24:17.417 01:52:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:24:17.417 01:52:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:24:17.417 01:52:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:17.417 01:52:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:24:18.350 01:52:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:24:18.350 01:52:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 
00:24:18.350 01:52:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:24:18.350 01:52:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:24:18.350 01:52:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:24:20.247 01:52:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:24:20.247 01:52:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:24:20.247 01:52:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK9 00:24:20.247 01:52:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:24:20.247 01:52:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:24:20.247 01:52:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:24:20.247 01:52:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:20.247 01:52:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:24:21.180 01:52:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:24:21.180 01:52:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:24:21.180 01:52:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:24:21.180 01:52:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:24:21.180 01:52:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:24:23.078 01:52:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:24:23.078 01:52:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:24:23.078 01:52:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK10 00:24:23.078 01:52:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:24:23.078 01:52:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:24:23.078 01:52:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:24:23.078 01:52:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:23.078 01:52:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:24:24.069 01:52:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:24:24.069 01:52:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:24:24.069 01:52:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:24:24.069 01:52:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:24:24.069 01:52:47 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1202 -- # sleep 2 00:24:25.967 01:52:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:24:25.967 01:52:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:24:25.967 01:52:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK11 00:24:25.967 01:52:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:24:25.967 01:52:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:24:25.967 01:52:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:24:25.967 01:52:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:24:25.967 [global] 00:24:25.967 thread=1 00:24:25.967 invalidate=1 00:24:25.967 rw=read 00:24:25.967 time_based=1 00:24:25.967 runtime=10 00:24:25.967 ioengine=libaio 00:24:25.967 direct=1 00:24:25.967 bs=262144 00:24:25.967 iodepth=64 00:24:25.967 norandommap=1 00:24:25.967 numjobs=1 00:24:25.967 00:24:25.967 [job0] 00:24:25.967 filename=/dev/nvme0n1 00:24:25.967 [job1] 00:24:25.967 filename=/dev/nvme10n1 00:24:25.967 [job2] 00:24:25.967 filename=/dev/nvme1n1 00:24:25.967 [job3] 00:24:25.967 filename=/dev/nvme2n1 00:24:25.967 [job4] 00:24:25.967 filename=/dev/nvme3n1 00:24:25.967 [job5] 00:24:25.967 filename=/dev/nvme4n1 00:24:25.967 [job6] 00:24:25.967 filename=/dev/nvme5n1 00:24:25.967 [job7] 00:24:25.967 filename=/dev/nvme6n1 00:24:25.967 [job8] 00:24:25.967 filename=/dev/nvme7n1 00:24:25.967 [job9] 00:24:25.967 filename=/dev/nvme8n1 00:24:25.967 [job10] 00:24:25.967 filename=/dev/nvme9n1 00:24:26.225 Could not set queue depth (nvme0n1) 00:24:26.225 Could not set queue depth (nvme10n1) 00:24:26.225 Could not set queue depth (nvme1n1) 00:24:26.225 Could not set queue depth (nvme2n1) 00:24:26.225 Could not set queue depth (nvme3n1) 00:24:26.225 Could not set queue depth (nvme4n1) 00:24:26.225 Could not set queue depth (nvme5n1) 00:24:26.225 Could not set queue depth (nvme6n1) 00:24:26.225 Could not set queue depth (nvme7n1) 00:24:26.225 Could not set queue depth (nvme8n1) 00:24:26.225 Could not set queue depth (nvme9n1) 00:24:26.225 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:26.225 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:26.225 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:26.225 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:26.225 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:26.225 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:26.225 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:26.225 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:26.225 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:26.225 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 
256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:26.225 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:26.225 fio-3.35 00:24:26.225 Starting 11 threads 00:24:38.431 00:24:38.431 job0: (groupid=0, jobs=1): err= 0: pid=4112789: Wed May 15 01:53:00 2024 00:24:38.431 read: IOPS=462, BW=116MiB/s (121MB/s)(1170MiB/10124msec) 00:24:38.431 slat (usec): min=12, max=90854, avg=1906.01, stdev=5962.28 00:24:38.431 clat (msec): min=32, max=277, avg=136.45, stdev=49.43 00:24:38.431 lat (msec): min=33, max=315, avg=138.35, stdev=50.28 00:24:38.431 clat percentiles (msec): 00:24:38.431 | 1.00th=[ 46], 5.00th=[ 63], 10.00th=[ 73], 20.00th=[ 89], 00:24:38.431 | 30.00th=[ 105], 40.00th=[ 117], 50.00th=[ 130], 60.00th=[ 148], 00:24:38.431 | 70.00th=[ 169], 80.00th=[ 188], 90.00th=[ 207], 95.00th=[ 220], 00:24:38.431 | 99.00th=[ 232], 99.50th=[ 241], 99.90th=[ 264], 99.95th=[ 264], 00:24:38.431 | 99.99th=[ 279] 00:24:38.431 bw ( KiB/s): min=68096, max=213504, per=6.62%, avg=118144.00, stdev=42100.13, samples=20 00:24:38.431 iops : min= 266, max= 834, avg=461.50, stdev=164.45, samples=20 00:24:38.431 lat (msec) : 50=1.80%, 100=24.93%, 250=73.07%, 500=0.21% 00:24:38.431 cpu : usr=0.25%, sys=1.66%, ctx=904, majf=0, minf=4097 00:24:38.431 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:24:38.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:38.431 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:38.431 issued rwts: total=4678,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:38.431 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:38.431 job1: (groupid=0, jobs=1): err= 0: pid=4112790: Wed May 15 01:53:00 2024 00:24:38.431 read: IOPS=626, BW=157MiB/s (164MB/s)(1593MiB/10166msec) 00:24:38.431 slat (usec): min=9, max=113085, avg=1244.38, stdev=5521.08 00:24:38.431 clat (usec): min=717, max=327993, avg=100778.25, stdev=63229.08 00:24:38.431 lat (usec): min=764, max=328021, avg=102022.64, stdev=64222.82 00:24:38.431 clat percentiles (msec): 00:24:38.431 | 1.00th=[ 3], 5.00th=[ 12], 10.00th=[ 21], 20.00th=[ 34], 00:24:38.431 | 30.00th=[ 56], 40.00th=[ 78], 50.00th=[ 100], 60.00th=[ 115], 00:24:38.431 | 70.00th=[ 132], 80.00th=[ 163], 90.00th=[ 197], 95.00th=[ 211], 00:24:38.431 | 99.00th=[ 232], 99.50th=[ 239], 99.90th=[ 292], 99.95th=[ 300], 00:24:38.431 | 99.99th=[ 330] 00:24:38.431 bw ( KiB/s): min=74240, max=453120, per=9.04%, avg=161459.20, stdev=87021.34, samples=20 00:24:38.431 iops : min= 290, max= 1770, avg=630.70, stdev=339.93, samples=20 00:24:38.431 lat (usec) : 750=0.03%, 1000=0.11% 00:24:38.431 lat (msec) : 2=0.39%, 4=1.07%, 10=2.81%, 20=5.20%, 50=18.12% 00:24:38.431 lat (msec) : 100=22.78%, 250=49.36%, 500=0.14% 00:24:38.431 cpu : usr=0.34%, sys=1.61%, ctx=1214, majf=0, minf=4097 00:24:38.431 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:24:38.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:38.431 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:38.432 issued rwts: total=6370,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:38.432 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:38.432 job2: (groupid=0, jobs=1): err= 0: pid=4112791: Wed May 15 01:53:00 2024 00:24:38.432 read: IOPS=853, BW=213MiB/s (224MB/s)(2140MiB/10031msec) 00:24:38.432 slat (usec): min=8, max=164473, avg=797.69, stdev=4537.98 00:24:38.432 clat (usec): min=966, 
max=372930, avg=74123.09, stdev=57272.85 00:24:38.432 lat (usec): min=985, max=372952, avg=74920.78, stdev=57847.73 00:24:38.432 clat percentiles (msec): 00:24:38.432 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 18], 20.00th=[ 31], 00:24:38.432 | 30.00th=[ 34], 40.00th=[ 39], 50.00th=[ 51], 60.00th=[ 68], 00:24:38.432 | 70.00th=[ 104], 80.00th=[ 133], 90.00th=[ 163], 95.00th=[ 184], 00:24:38.432 | 99.00th=[ 226], 99.50th=[ 230], 99.90th=[ 245], 99.95th=[ 253], 00:24:38.432 | 99.99th=[ 372] 00:24:38.432 bw ( KiB/s): min=92160, max=471040, per=12.18%, avg=217548.80, stdev=123845.51, samples=20 00:24:38.432 iops : min= 360, max= 1840, avg=849.80, stdev=483.77, samples=20 00:24:38.432 lat (usec) : 1000=0.07% 00:24:38.432 lat (msec) : 2=0.55%, 4=1.25%, 10=5.35%, 20=3.67%, 50=38.86% 00:24:38.432 lat (msec) : 100=19.31%, 250=30.88%, 500=0.06% 00:24:38.432 cpu : usr=0.30%, sys=2.41%, ctx=1619, majf=0, minf=4097 00:24:38.432 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:24:38.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:38.432 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:38.432 issued rwts: total=8561,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:38.432 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:38.432 job3: (groupid=0, jobs=1): err= 0: pid=4112792: Wed May 15 01:53:00 2024 00:24:38.432 read: IOPS=673, BW=168MiB/s (176MB/s)(1703MiB/10120msec) 00:24:38.432 slat (usec): min=9, max=95147, avg=887.26, stdev=4505.57 00:24:38.432 clat (usec): min=718, max=267553, avg=94095.06, stdev=53879.12 00:24:38.432 lat (usec): min=743, max=284905, avg=94982.33, stdev=54499.77 00:24:38.432 clat percentiles (msec): 00:24:38.432 | 1.00th=[ 4], 5.00th=[ 14], 10.00th=[ 24], 20.00th=[ 49], 00:24:38.432 | 30.00th=[ 59], 40.00th=[ 67], 50.00th=[ 82], 60.00th=[ 110], 00:24:38.432 | 70.00th=[ 134], 80.00th=[ 150], 90.00th=[ 167], 95.00th=[ 182], 00:24:38.432 | 99.00th=[ 213], 99.50th=[ 220], 99.90th=[ 228], 99.95th=[ 228], 00:24:38.432 | 99.99th=[ 268] 00:24:38.432 bw ( KiB/s): min=88064, max=315392, per=9.68%, avg=172774.40, stdev=76225.76, samples=20 00:24:38.432 iops : min= 344, max= 1232, avg=674.90, stdev=297.76, samples=20 00:24:38.432 lat (usec) : 750=0.04%, 1000=0.06% 00:24:38.432 lat (msec) : 2=0.18%, 4=1.01%, 10=1.82%, 20=4.87%, 50=13.61% 00:24:38.432 lat (msec) : 100=35.70%, 250=42.69%, 500=0.01% 00:24:38.432 cpu : usr=0.21%, sys=1.71%, ctx=1223, majf=0, minf=3722 00:24:38.432 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:38.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:38.432 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:38.432 issued rwts: total=6812,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:38.432 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:38.432 job4: (groupid=0, jobs=1): err= 0: pid=4112793: Wed May 15 01:53:00 2024 00:24:38.432 read: IOPS=763, BW=191MiB/s (200MB/s)(1924MiB/10075msec) 00:24:38.432 slat (usec): min=8, max=128463, avg=969.51, stdev=4135.17 00:24:38.432 clat (usec): min=965, max=277944, avg=82720.07, stdev=50587.97 00:24:38.432 lat (usec): min=1012, max=335896, avg=83689.58, stdev=51295.84 00:24:38.432 clat percentiles (msec): 00:24:38.432 | 1.00th=[ 6], 5.00th=[ 20], 10.00th=[ 31], 20.00th=[ 38], 00:24:38.432 | 30.00th=[ 51], 40.00th=[ 63], 50.00th=[ 72], 60.00th=[ 82], 00:24:38.432 | 70.00th=[ 97], 80.00th=[ 128], 90.00th=[ 163], 95.00th=[ 184], 00:24:38.432 | 
99.00th=[ 222], 99.50th=[ 224], 99.90th=[ 232], 99.95th=[ 241], 00:24:38.432 | 99.99th=[ 279] 00:24:38.432 bw ( KiB/s): min=83968, max=400896, per=10.94%, avg=195430.40, stdev=94519.15, samples=20 00:24:38.432 iops : min= 328, max= 1566, avg=763.40, stdev=369.22, samples=20 00:24:38.432 lat (usec) : 1000=0.01% 00:24:38.432 lat (msec) : 2=0.08%, 4=0.62%, 10=1.36%, 20=3.08%, 50=24.31% 00:24:38.432 lat (msec) : 100=41.94%, 250=28.56%, 500=0.04% 00:24:38.432 cpu : usr=0.36%, sys=2.06%, ctx=1448, majf=0, minf=4097 00:24:38.432 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:38.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:38.432 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:38.432 issued rwts: total=7697,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:38.432 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:38.432 job5: (groupid=0, jobs=1): err= 0: pid=4112794: Wed May 15 01:53:00 2024 00:24:38.432 read: IOPS=586, BW=147MiB/s (154MB/s)(1485MiB/10121msec) 00:24:38.432 slat (usec): min=9, max=125516, avg=1361.59, stdev=6516.38 00:24:38.432 clat (usec): min=812, max=308782, avg=107590.63, stdev=71651.26 00:24:38.432 lat (usec): min=838, max=308805, avg=108952.22, stdev=72830.74 00:24:38.432 clat percentiles (msec): 00:24:38.432 | 1.00th=[ 3], 5.00th=[ 9], 10.00th=[ 15], 20.00th=[ 27], 00:24:38.432 | 30.00th=[ 50], 40.00th=[ 81], 50.00th=[ 112], 60.00th=[ 131], 00:24:38.432 | 70.00th=[ 155], 80.00th=[ 180], 90.00th=[ 207], 95.00th=[ 220], 00:24:38.432 | 99.00th=[ 245], 99.50th=[ 249], 99.90th=[ 296], 99.95th=[ 305], 00:24:38.432 | 99.99th=[ 309] 00:24:38.432 bw ( KiB/s): min=62076, max=315392, per=8.42%, avg=150431.80, stdev=82968.74, samples=20 00:24:38.432 iops : min= 242, max= 1232, avg=587.60, stdev=324.12, samples=20 00:24:38.432 lat (usec) : 1000=0.02% 00:24:38.432 lat (msec) : 2=0.56%, 4=1.70%, 10=3.33%, 20=10.89%, 50=13.74% 00:24:38.432 lat (msec) : 100=15.19%, 250=54.10%, 500=0.47% 00:24:38.432 cpu : usr=0.28%, sys=1.66%, ctx=1165, majf=0, minf=4097 00:24:38.432 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:24:38.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:38.432 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:38.432 issued rwts: total=5939,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:38.432 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:38.432 job6: (groupid=0, jobs=1): err= 0: pid=4112795: Wed May 15 01:53:00 2024 00:24:38.432 read: IOPS=712, BW=178MiB/s (187MB/s)(1796MiB/10076msec) 00:24:38.432 slat (usec): min=10, max=73126, avg=1338.31, stdev=4599.48 00:24:38.432 clat (msec): min=2, max=268, avg=88.34, stdev=57.04 00:24:38.432 lat (msec): min=2, max=273, avg=89.68, stdev=57.93 00:24:38.432 clat percentiles (msec): 00:24:38.432 | 1.00th=[ 10], 5.00th=[ 29], 10.00th=[ 31], 20.00th=[ 35], 00:24:38.432 | 30.00th=[ 48], 40.00th=[ 64], 50.00th=[ 74], 60.00th=[ 90], 00:24:38.432 | 70.00th=[ 106], 80.00th=[ 131], 90.00th=[ 188], 95.00th=[ 209], 00:24:38.432 | 99.00th=[ 232], 99.50th=[ 239], 99.90th=[ 253], 99.95th=[ 255], 00:24:38.432 | 99.99th=[ 271] 00:24:38.432 bw ( KiB/s): min=69632, max=505344, per=10.21%, avg=182272.00, stdev=105890.81, samples=20 00:24:38.432 iops : min= 272, max= 1974, avg=712.00, stdev=413.64, samples=20 00:24:38.432 lat (msec) : 4=0.35%, 10=0.95%, 20=1.85%, 50=28.75%, 100=35.21% 00:24:38.432 lat (msec) : 250=32.77%, 500=0.13% 00:24:38.432 cpu : 
usr=0.50%, sys=2.25%, ctx=1222, majf=0, minf=4097 00:24:38.432 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:24:38.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:38.432 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:38.432 issued rwts: total=7183,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:38.432 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:38.432 job7: (groupid=0, jobs=1): err= 0: pid=4112796: Wed May 15 01:53:00 2024 00:24:38.432 read: IOPS=494, BW=124MiB/s (130MB/s)(1251MiB/10126msec) 00:24:38.432 slat (usec): min=8, max=75142, avg=1665.75, stdev=5458.36 00:24:38.432 clat (msec): min=3, max=300, avg=127.68, stdev=53.05 00:24:38.432 lat (msec): min=3, max=300, avg=129.35, stdev=53.88 00:24:38.432 clat percentiles (msec): 00:24:38.432 | 1.00th=[ 15], 5.00th=[ 52], 10.00th=[ 64], 20.00th=[ 81], 00:24:38.432 | 30.00th=[ 94], 40.00th=[ 109], 50.00th=[ 121], 60.00th=[ 134], 00:24:38.432 | 70.00th=[ 161], 80.00th=[ 182], 90.00th=[ 205], 95.00th=[ 218], 00:24:38.432 | 99.00th=[ 234], 99.50th=[ 245], 99.90th=[ 255], 99.95th=[ 259], 00:24:38.432 | 99.99th=[ 300] 00:24:38.432 bw ( KiB/s): min=72192, max=236544, per=7.08%, avg=126515.20, stdev=45063.32, samples=20 00:24:38.432 iops : min= 282, max= 924, avg=494.20, stdev=176.03, samples=20 00:24:38.432 lat (msec) : 4=0.04%, 10=0.42%, 20=1.04%, 50=3.28%, 100=30.15% 00:24:38.432 lat (msec) : 250=64.88%, 500=0.20% 00:24:38.432 cpu : usr=0.18%, sys=1.52%, ctx=988, majf=0, minf=4097 00:24:38.432 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:24:38.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:38.432 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:38.432 issued rwts: total=5005,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:38.432 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:38.432 job8: (groupid=0, jobs=1): err= 0: pid=4112800: Wed May 15 01:53:00 2024 00:24:38.432 read: IOPS=534, BW=134MiB/s (140MB/s)(1340MiB/10029msec) 00:24:38.432 slat (usec): min=9, max=131878, avg=1659.70, stdev=6423.30 00:24:38.432 clat (usec): min=1408, max=337878, avg=118021.63, stdev=60735.20 00:24:38.432 lat (usec): min=1452, max=337894, avg=119681.33, stdev=61730.89 00:24:38.432 clat percentiles (msec): 00:24:38.432 | 1.00th=[ 6], 5.00th=[ 16], 10.00th=[ 34], 20.00th=[ 61], 00:24:38.432 | 30.00th=[ 85], 40.00th=[ 103], 50.00th=[ 117], 60.00th=[ 136], 00:24:38.432 | 70.00th=[ 155], 80.00th=[ 178], 90.00th=[ 205], 95.00th=[ 215], 00:24:38.432 | 99.00th=[ 228], 99.50th=[ 232], 99.90th=[ 255], 99.95th=[ 334], 00:24:38.432 | 99.99th=[ 338] 00:24:38.432 bw ( KiB/s): min=64128, max=329728, per=7.59%, avg=135558.40, stdev=62475.92, samples=20 00:24:38.432 iops : min= 250, max= 1288, avg=529.50, stdev=244.08, samples=20 00:24:38.432 lat (msec) : 2=0.09%, 4=0.65%, 10=2.24%, 20=2.93%, 50=11.81% 00:24:38.432 lat (msec) : 100=21.13%, 250=60.99%, 500=0.15% 00:24:38.432 cpu : usr=0.35%, sys=1.82%, ctx=1067, majf=0, minf=4097 00:24:38.432 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:38.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:38.432 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:38.432 issued rwts: total=5358,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:38.432 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:38.432 job9: (groupid=0, jobs=1): err= 0: 
pid=4112804: Wed May 15 01:53:00 2024 00:24:38.432 read: IOPS=480, BW=120MiB/s (126MB/s)(1216MiB/10127msec) 00:24:38.432 slat (usec): min=8, max=102560, avg=1673.22, stdev=5471.67 00:24:38.432 clat (usec): min=962, max=272856, avg=131440.79, stdev=52534.80 00:24:38.432 lat (usec): min=995, max=272905, avg=133114.01, stdev=53198.80 00:24:38.432 clat percentiles (msec): 00:24:38.433 | 1.00th=[ 3], 5.00th=[ 40], 10.00th=[ 66], 20.00th=[ 89], 00:24:38.433 | 30.00th=[ 105], 40.00th=[ 115], 50.00th=[ 129], 60.00th=[ 146], 00:24:38.433 | 70.00th=[ 161], 80.00th=[ 182], 90.00th=[ 207], 95.00th=[ 218], 00:24:38.433 | 99.00th=[ 234], 99.50th=[ 241], 99.90th=[ 271], 99.95th=[ 275], 00:24:38.433 | 99.99th=[ 275] 00:24:38.433 bw ( KiB/s): min=75264, max=254460, per=6.88%, avg=122931.00, stdev=43331.40, samples=20 00:24:38.433 iops : min= 294, max= 993, avg=480.15, stdev=169.11, samples=20 00:24:38.433 lat (usec) : 1000=0.02% 00:24:38.433 lat (msec) : 2=0.39%, 4=1.03%, 10=0.21%, 20=0.58%, 50=4.42% 00:24:38.433 lat (msec) : 100=19.70%, 250=73.44%, 500=0.23% 00:24:38.433 cpu : usr=0.29%, sys=1.59%, ctx=967, majf=0, minf=4097 00:24:38.433 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:24:38.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:38.433 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:38.433 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:38.433 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:38.433 job10: (groupid=0, jobs=1): err= 0: pid=4112805: Wed May 15 01:53:00 2024 00:24:38.433 read: IOPS=837, BW=209MiB/s (220MB/s)(2111MiB/10077msec) 00:24:38.433 slat (usec): min=9, max=109004, avg=984.26, stdev=3880.39 00:24:38.433 clat (msec): min=3, max=251, avg=75.32, stdev=53.05 00:24:38.433 lat (msec): min=3, max=313, avg=76.30, stdev=53.71 00:24:38.433 clat percentiles (msec): 00:24:38.433 | 1.00th=[ 17], 5.00th=[ 30], 10.00th=[ 31], 20.00th=[ 33], 00:24:38.433 | 30.00th=[ 35], 40.00th=[ 37], 50.00th=[ 48], 60.00th=[ 72], 00:24:38.433 | 70.00th=[ 97], 80.00th=[ 127], 90.00th=[ 165], 95.00th=[ 180], 00:24:38.433 | 99.00th=[ 209], 99.50th=[ 215], 99.90th=[ 245], 99.95th=[ 245], 00:24:38.433 | 99.99th=[ 253] 00:24:38.433 bw ( KiB/s): min=73216, max=464896, per=12.02%, avg=214553.60, stdev=130847.33, samples=20 00:24:38.433 iops : min= 286, max= 1816, avg=838.10, stdev=511.12, samples=20 00:24:38.433 lat (msec) : 4=0.02%, 10=0.57%, 20=0.86%, 50=50.17%, 100=18.81% 00:24:38.433 lat (msec) : 250=29.56%, 500=0.01% 00:24:38.433 cpu : usr=0.48%, sys=2.52%, ctx=1593, majf=0, minf=4097 00:24:38.433 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:24:38.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:38.433 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:38.433 issued rwts: total=8444,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:38.433 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:38.433 00:24:38.433 Run status group 0 (all jobs): 00:24:38.433 READ: bw=1744MiB/s (1829MB/s), 116MiB/s-213MiB/s (121MB/s-224MB/s), io=17.3GiB (18.6GB), run=10029-10166msec 00:24:38.433 00:24:38.433 Disk stats (read/write): 00:24:38.433 nvme0n1: ios=9195/0, merge=0/0, ticks=1235618/0, in_queue=1235618, util=97.19% 00:24:38.433 nvme10n1: ios=12534/0, merge=0/0, ticks=1234200/0, in_queue=1234200, util=97.41% 00:24:38.433 nvme1n1: ios=16899/0, merge=0/0, ticks=1240570/0, in_queue=1240570, util=97.67% 
00:24:38.433 nvme2n1: ios=13451/0, merge=0/0, ticks=1245430/0, in_queue=1245430, util=97.81% 00:24:38.433 nvme3n1: ios=15170/0, merge=0/0, ticks=1238128/0, in_queue=1238128, util=97.88% 00:24:38.433 nvme4n1: ios=11717/0, merge=0/0, ticks=1238649/0, in_queue=1238649, util=98.19% 00:24:38.433 nvme5n1: ios=14173/0, merge=0/0, ticks=1233834/0, in_queue=1233834, util=98.35% 00:24:38.433 nvme6n1: ios=9829/0, merge=0/0, ticks=1233936/0, in_queue=1233936, util=98.45% 00:24:38.433 nvme7n1: ios=10428/0, merge=0/0, ticks=1235652/0, in_queue=1235652, util=98.86% 00:24:38.433 nvme8n1: ios=9510/0, merge=0/0, ticks=1233990/0, in_queue=1233990, util=99.06% 00:24:38.433 nvme9n1: ios=16683/0, merge=0/0, ticks=1235765/0, in_queue=1235765, util=99.21% 00:24:38.433 01:53:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:24:38.433 [global] 00:24:38.433 thread=1 00:24:38.433 invalidate=1 00:24:38.433 rw=randwrite 00:24:38.433 time_based=1 00:24:38.433 runtime=10 00:24:38.433 ioengine=libaio 00:24:38.433 direct=1 00:24:38.433 bs=262144 00:24:38.433 iodepth=64 00:24:38.433 norandommap=1 00:24:38.433 numjobs=1 00:24:38.433 00:24:38.433 [job0] 00:24:38.433 filename=/dev/nvme0n1 00:24:38.433 [job1] 00:24:38.433 filename=/dev/nvme10n1 00:24:38.433 [job2] 00:24:38.433 filename=/dev/nvme1n1 00:24:38.433 [job3] 00:24:38.433 filename=/dev/nvme2n1 00:24:38.433 [job4] 00:24:38.433 filename=/dev/nvme3n1 00:24:38.433 [job5] 00:24:38.433 filename=/dev/nvme4n1 00:24:38.433 [job6] 00:24:38.433 filename=/dev/nvme5n1 00:24:38.433 [job7] 00:24:38.433 filename=/dev/nvme6n1 00:24:38.433 [job8] 00:24:38.433 filename=/dev/nvme7n1 00:24:38.433 [job9] 00:24:38.433 filename=/dev/nvme8n1 00:24:38.433 [job10] 00:24:38.433 filename=/dev/nvme9n1 00:24:38.433 Could not set queue depth (nvme0n1) 00:24:38.433 Could not set queue depth (nvme10n1) 00:24:38.433 Could not set queue depth (nvme1n1) 00:24:38.433 Could not set queue depth (nvme2n1) 00:24:38.433 Could not set queue depth (nvme3n1) 00:24:38.433 Could not set queue depth (nvme4n1) 00:24:38.433 Could not set queue depth (nvme5n1) 00:24:38.433 Could not set queue depth (nvme6n1) 00:24:38.433 Could not set queue depth (nvme7n1) 00:24:38.433 Could not set queue depth (nvme8n1) 00:24:38.433 Could not set queue depth (nvme9n1) 00:24:38.433 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:38.433 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:38.433 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:38.433 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:38.433 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:38.433 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:38.433 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:38.433 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:38.433 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 
00:24:38.433 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:38.433 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:38.433 fio-3.35 00:24:38.433 Starting 11 threads 00:24:48.400 00:24:48.400 job0: (groupid=0, jobs=1): err= 0: pid=4113847: Wed May 15 01:53:11 2024 00:24:48.400 write: IOPS=436, BW=109MiB/s (115MB/s)(1105MiB/10119msec); 0 zone resets 00:24:48.400 slat (usec): min=14, max=68603, avg=1730.96, stdev=4529.51 00:24:48.400 clat (msec): min=2, max=306, avg=144.68, stdev=66.94 00:24:48.400 lat (msec): min=2, max=312, avg=146.41, stdev=67.81 00:24:48.400 clat percentiles (msec): 00:24:48.400 | 1.00th=[ 12], 5.00th=[ 31], 10.00th=[ 55], 20.00th=[ 88], 00:24:48.400 | 30.00th=[ 97], 40.00th=[ 123], 50.00th=[ 144], 60.00th=[ 174], 00:24:48.400 | 70.00th=[ 192], 80.00th=[ 211], 90.00th=[ 228], 95.00th=[ 243], 00:24:48.400 | 99.00th=[ 279], 99.50th=[ 288], 99.90th=[ 300], 99.95th=[ 305], 00:24:48.400 | 99.99th=[ 309] 00:24:48.400 bw ( KiB/s): min=75776, max=197632, per=7.81%, avg=111579.65, stdev=34198.62, samples=20 00:24:48.400 iops : min= 296, max= 772, avg=435.85, stdev=133.58, samples=20 00:24:48.400 lat (msec) : 4=0.11%, 10=0.61%, 20=1.95%, 50=6.29%, 100=22.35% 00:24:48.400 lat (msec) : 250=64.49%, 500=4.21% 00:24:48.400 cpu : usr=1.67%, sys=1.48%, ctx=2263, majf=0, minf=1 00:24:48.400 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:24:48.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:48.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:48.400 issued rwts: total=0,4421,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:48.400 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:48.400 job1: (groupid=0, jobs=1): err= 0: pid=4113871: Wed May 15 01:53:11 2024 00:24:48.400 write: IOPS=432, BW=108MiB/s (113MB/s)(1099MiB/10175msec); 0 zone resets 00:24:48.400 slat (usec): min=21, max=71290, avg=1728.17, stdev=4634.11 00:24:48.400 clat (usec): min=1139, max=366105, avg=146296.29, stdev=73229.11 00:24:48.400 lat (usec): min=1172, max=366152, avg=148024.46, stdev=74245.17 00:24:48.400 clat percentiles (msec): 00:24:48.400 | 1.00th=[ 12], 5.00th=[ 24], 10.00th=[ 41], 20.00th=[ 82], 00:24:48.400 | 30.00th=[ 104], 40.00th=[ 123], 50.00th=[ 144], 60.00th=[ 174], 00:24:48.400 | 70.00th=[ 194], 80.00th=[ 215], 90.00th=[ 239], 95.00th=[ 259], 00:24:48.400 | 99.00th=[ 300], 99.50th=[ 317], 99.90th=[ 359], 99.95th=[ 359], 00:24:48.400 | 99.99th=[ 368] 00:24:48.400 bw ( KiB/s): min=61952, max=236544, per=7.77%, avg=110962.00, stdev=41822.29, samples=20 00:24:48.400 iops : min= 242, max= 924, avg=433.40, stdev=163.36, samples=20 00:24:48.400 lat (msec) : 2=0.05%, 4=0.07%, 10=0.71%, 20=2.62%, 50=9.10% 00:24:48.400 lat (msec) : 100=16.74%, 250=64.38%, 500=6.35% 00:24:48.400 cpu : usr=1.44%, sys=1.37%, ctx=2348, majf=0, minf=1 00:24:48.400 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:24:48.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:48.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:48.400 issued rwts: total=0,4397,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:48.400 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:48.400 job2: (groupid=0, jobs=1): err= 0: pid=4113908: Wed May 15 01:53:11 2024 00:24:48.400 write: IOPS=493, BW=123MiB/s 
(129MB/s)(1256MiB/10183msec); 0 zone resets 00:24:48.400 slat (usec): min=16, max=32519, avg=1536.69, stdev=3875.65 00:24:48.400 clat (msec): min=2, max=378, avg=128.13, stdev=74.71 00:24:48.400 lat (msec): min=2, max=378, avg=129.66, stdev=75.76 00:24:48.400 clat percentiles (msec): 00:24:48.400 | 1.00th=[ 12], 5.00th=[ 27], 10.00th=[ 45], 20.00th=[ 51], 00:24:48.400 | 30.00th=[ 71], 40.00th=[ 100], 50.00th=[ 124], 60.00th=[ 144], 00:24:48.400 | 70.00th=[ 169], 80.00th=[ 199], 90.00th=[ 230], 95.00th=[ 257], 00:24:48.400 | 99.00th=[ 317], 99.50th=[ 334], 99.90th=[ 368], 99.95th=[ 372], 00:24:48.400 | 99.99th=[ 380] 00:24:48.400 bw ( KiB/s): min=65536, max=302080, per=8.89%, avg=126996.90, stdev=53742.15, samples=20 00:24:48.400 iops : min= 256, max= 1180, avg=496.05, stdev=209.88, samples=20 00:24:48.400 lat (msec) : 4=0.12%, 10=0.64%, 20=2.43%, 50=16.94%, 100=20.55% 00:24:48.400 lat (msec) : 250=53.69%, 500=5.63% 00:24:48.400 cpu : usr=1.69%, sys=1.65%, ctx=2435, majf=0, minf=1 00:24:48.400 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:24:48.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:48.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:48.400 issued rwts: total=0,5023,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:48.400 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:48.400 job3: (groupid=0, jobs=1): err= 0: pid=4113926: Wed May 15 01:53:11 2024 00:24:48.400 write: IOPS=462, BW=116MiB/s (121MB/s)(1167MiB/10091msec); 0 zone resets 00:24:48.400 slat (usec): min=18, max=115073, avg=1463.55, stdev=4527.43 00:24:48.400 clat (usec): min=1307, max=343149, avg=136888.02, stdev=79224.75 00:24:48.400 lat (usec): min=1489, max=343267, avg=138351.57, stdev=80260.26 00:24:48.400 clat percentiles (msec): 00:24:48.400 | 1.00th=[ 5], 5.00th=[ 18], 10.00th=[ 35], 20.00th=[ 66], 00:24:48.400 | 30.00th=[ 89], 40.00th=[ 103], 50.00th=[ 120], 60.00th=[ 150], 00:24:48.400 | 70.00th=[ 188], 80.00th=[ 220], 90.00th=[ 253], 95.00th=[ 271], 00:24:48.400 | 99.00th=[ 305], 99.50th=[ 321], 99.90th=[ 342], 99.95th=[ 342], 00:24:48.400 | 99.99th=[ 342] 00:24:48.400 bw ( KiB/s): min=63488, max=194560, per=8.25%, avg=117836.80, stdev=41363.33, samples=20 00:24:48.400 iops : min= 248, max= 760, avg=460.30, stdev=161.58, samples=20 00:24:48.400 lat (msec) : 2=0.09%, 4=0.88%, 10=2.08%, 20=2.64%, 50=8.38% 00:24:48.400 lat (msec) : 100=23.47%, 250=51.91%, 500=10.57% 00:24:48.400 cpu : usr=1.60%, sys=1.60%, ctx=2808, majf=0, minf=1 00:24:48.400 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:24:48.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:48.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:48.400 issued rwts: total=0,4666,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:48.400 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:48.400 job4: (groupid=0, jobs=1): err= 0: pid=4113938: Wed May 15 01:53:11 2024 00:24:48.400 write: IOPS=451, BW=113MiB/s (118MB/s)(1150MiB/10182msec); 0 zone resets 00:24:48.400 slat (usec): min=21, max=114887, avg=1382.57, stdev=4662.73 00:24:48.400 clat (msec): min=2, max=381, avg=140.13, stdev=84.43 00:24:48.400 lat (msec): min=2, max=381, avg=141.52, stdev=85.52 00:24:48.400 clat percentiles (msec): 00:24:48.400 | 1.00th=[ 8], 5.00th=[ 22], 10.00th=[ 33], 20.00th=[ 67], 00:24:48.400 | 30.00th=[ 78], 40.00th=[ 110], 50.00th=[ 136], 60.00th=[ 150], 00:24:48.400 | 70.00th=[ 184], 
80.00th=[ 213], 90.00th=[ 257], 95.00th=[ 305], 00:24:48.400 | 99.00th=[ 351], 99.50th=[ 359], 99.90th=[ 380], 99.95th=[ 380], 00:24:48.400 | 99.99th=[ 384] 00:24:48.400 bw ( KiB/s): min=49152, max=210432, per=8.13%, avg=116130.85, stdev=43240.31, samples=20 00:24:48.400 iops : min= 192, max= 822, avg=453.60, stdev=168.93, samples=20 00:24:48.400 lat (msec) : 4=0.13%, 10=1.43%, 20=2.89%, 50=11.20%, 100=22.59% 00:24:48.400 lat (msec) : 250=50.76%, 500=11.00% 00:24:48.400 cpu : usr=1.30%, sys=1.65%, ctx=2858, majf=0, minf=1 00:24:48.400 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:24:48.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:48.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:48.400 issued rwts: total=0,4600,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:48.400 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:48.400 job5: (groupid=0, jobs=1): err= 0: pid=4113985: Wed May 15 01:53:11 2024 00:24:48.400 write: IOPS=558, BW=140MiB/s (146MB/s)(1421MiB/10175msec); 0 zone resets 00:24:48.400 slat (usec): min=14, max=45129, avg=1323.03, stdev=3617.11 00:24:48.400 clat (usec): min=760, max=349277, avg=113216.11, stdev=74916.39 00:24:48.400 lat (usec): min=837, max=349344, avg=114539.14, stdev=75866.85 00:24:48.400 clat percentiles (msec): 00:24:48.400 | 1.00th=[ 3], 5.00th=[ 10], 10.00th=[ 23], 20.00th=[ 48], 00:24:48.400 | 30.00th=[ 73], 40.00th=[ 81], 50.00th=[ 90], 60.00th=[ 117], 00:24:48.400 | 70.00th=[ 140], 80.00th=[ 180], 90.00th=[ 228], 95.00th=[ 259], 00:24:48.400 | 99.00th=[ 309], 99.50th=[ 317], 99.90th=[ 342], 99.95th=[ 347], 00:24:48.400 | 99.99th=[ 351] 00:24:48.400 bw ( KiB/s): min=59392, max=236032, per=10.07%, avg=143854.50, stdev=56793.90, samples=20 00:24:48.400 iops : min= 232, max= 922, avg=561.90, stdev=221.89, samples=20 00:24:48.400 lat (usec) : 1000=0.12% 00:24:48.400 lat (msec) : 2=0.63%, 4=1.94%, 10=2.55%, 20=3.77%, 50=11.39% 00:24:48.400 lat (msec) : 100=32.91%, 250=40.29%, 500=6.41% 00:24:48.400 cpu : usr=1.76%, sys=1.69%, ctx=3048, majf=0, minf=1 00:24:48.400 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:24:48.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:48.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:48.400 issued rwts: total=0,5682,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:48.400 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:48.400 job6: (groupid=0, jobs=1): err= 0: pid=4113988: Wed May 15 01:53:11 2024 00:24:48.400 write: IOPS=479, BW=120MiB/s (126MB/s)(1221MiB/10183msec); 0 zone resets 00:24:48.400 slat (usec): min=25, max=72185, avg=1912.29, stdev=4206.58 00:24:48.400 clat (msec): min=5, max=323, avg=130.92, stdev=55.09 00:24:48.400 lat (msec): min=7, max=323, avg=132.83, stdev=55.72 00:24:48.400 clat percentiles (msec): 00:24:48.400 | 1.00th=[ 26], 5.00th=[ 44], 10.00th=[ 61], 20.00th=[ 88], 00:24:48.400 | 30.00th=[ 91], 40.00th=[ 109], 50.00th=[ 125], 60.00th=[ 146], 00:24:48.400 | 70.00th=[ 163], 80.00th=[ 182], 90.00th=[ 207], 95.00th=[ 220], 00:24:48.400 | 99.00th=[ 279], 99.50th=[ 300], 99.90th=[ 321], 99.95th=[ 321], 00:24:48.400 | 99.99th=[ 326] 00:24:48.400 bw ( KiB/s): min=74901, max=224256, per=8.64%, avg=123348.25, stdev=46309.77, samples=20 00:24:48.400 iops : min= 292, max= 876, avg=481.80, stdev=180.93, samples=20 00:24:48.400 lat (msec) : 10=0.06%, 20=0.37%, 50=6.19%, 100=29.74%, 250=62.19% 00:24:48.400 lat 
(msec) : 500=1.45% 00:24:48.400 cpu : usr=1.50%, sys=1.56%, ctx=1569, majf=0, minf=1 00:24:48.400 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:24:48.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:48.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:48.400 issued rwts: total=0,4882,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:48.400 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:48.400 job7: (groupid=0, jobs=1): err= 0: pid=4113989: Wed May 15 01:53:11 2024 00:24:48.400 write: IOPS=746, BW=187MiB/s (196MB/s)(1876MiB/10045msec); 0 zone resets 00:24:48.400 slat (usec): min=17, max=131489, avg=1081.48, stdev=3166.61 00:24:48.400 clat (msec): min=2, max=377, avg=84.56, stdev=56.49 00:24:48.400 lat (msec): min=2, max=377, avg=85.64, stdev=57.18 00:24:48.400 clat percentiles (msec): 00:24:48.400 | 1.00th=[ 8], 5.00th=[ 23], 10.00th=[ 28], 20.00th=[ 42], 00:24:48.400 | 30.00th=[ 44], 40.00th=[ 58], 50.00th=[ 75], 60.00th=[ 83], 00:24:48.400 | 70.00th=[ 104], 80.00th=[ 120], 90.00th=[ 159], 95.00th=[ 205], 00:24:48.400 | 99.00th=[ 279], 99.50th=[ 292], 99.90th=[ 317], 99.95th=[ 326], 00:24:48.400 | 99.99th=[ 376] 00:24:48.400 bw ( KiB/s): min=62976, max=381952, per=13.34%, avg=190481.15, stdev=88092.89, samples=20 00:24:48.400 iops : min= 246, max= 1492, avg=744.05, stdev=344.12, samples=20 00:24:48.401 lat (msec) : 4=0.09%, 10=1.55%, 20=2.68%, 50=31.13%, 100=32.39% 00:24:48.401 lat (msec) : 250=30.21%, 500=1.95% 00:24:48.401 cpu : usr=2.40%, sys=2.71%, ctx=3350, majf=0, minf=1 00:24:48.401 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:48.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:48.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:48.401 issued rwts: total=0,7503,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:48.401 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:48.401 job8: (groupid=0, jobs=1): err= 0: pid=4113996: Wed May 15 01:53:11 2024 00:24:48.401 write: IOPS=556, BW=139MiB/s (146MB/s)(1396MiB/10037msec); 0 zone resets 00:24:48.401 slat (usec): min=19, max=99413, avg=1284.06, stdev=3969.63 00:24:48.401 clat (usec): min=1296, max=330225, avg=113661.80, stdev=73625.12 00:24:48.401 lat (usec): min=1961, max=330270, avg=114945.85, stdev=74400.55 00:24:48.401 clat percentiles (msec): 00:24:48.401 | 1.00th=[ 7], 5.00th=[ 20], 10.00th=[ 41], 20.00th=[ 47], 00:24:48.401 | 30.00th=[ 52], 40.00th=[ 75], 50.00th=[ 93], 60.00th=[ 124], 00:24:48.401 | 70.00th=[ 150], 80.00th=[ 188], 90.00th=[ 228], 95.00th=[ 249], 00:24:48.401 | 99.00th=[ 271], 99.50th=[ 284], 99.90th=[ 321], 99.95th=[ 321], 00:24:48.401 | 99.99th=[ 330] 00:24:48.401 bw ( KiB/s): min=71168, max=323072, per=9.90%, avg=141363.20, stdev=72530.89, samples=20 00:24:48.401 iops : min= 278, max= 1262, avg=552.20, stdev=283.32, samples=20 00:24:48.401 lat (msec) : 2=0.05%, 4=0.41%, 10=1.90%, 20=2.95%, 50=24.08% 00:24:48.401 lat (msec) : 100=23.78%, 250=41.99%, 500=4.83% 00:24:48.401 cpu : usr=1.56%, sys=1.94%, ctx=2759, majf=0, minf=1 00:24:48.401 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:24:48.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:48.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:48.401 issued rwts: total=0,5585,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:48.401 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:24:48.401 job9: (groupid=0, jobs=1): err= 0: pid=4113997: Wed May 15 01:53:11 2024 00:24:48.401 write: IOPS=451, BW=113MiB/s (118MB/s)(1144MiB/10122msec); 0 zone resets 00:24:48.401 slat (usec): min=20, max=72578, avg=1693.75, stdev=4079.71 00:24:48.401 clat (usec): min=1125, max=294648, avg=139816.94, stdev=64386.74 00:24:48.401 lat (usec): min=1196, max=294696, avg=141510.69, stdev=65152.68 00:24:48.401 clat percentiles (msec): 00:24:48.401 | 1.00th=[ 7], 5.00th=[ 21], 10.00th=[ 52], 20.00th=[ 87], 00:24:48.401 | 30.00th=[ 107], 40.00th=[ 118], 50.00th=[ 140], 60.00th=[ 159], 00:24:48.401 | 70.00th=[ 180], 80.00th=[ 201], 90.00th=[ 222], 95.00th=[ 247], 00:24:48.401 | 99.00th=[ 271], 99.50th=[ 279], 99.90th=[ 292], 99.95th=[ 292], 00:24:48.401 | 99.99th=[ 296] 00:24:48.401 bw ( KiB/s): min=65536, max=212480, per=8.09%, avg=115519.60, stdev=38268.25, samples=20 00:24:48.401 iops : min= 256, max= 830, avg=451.20, stdev=149.47, samples=20 00:24:48.401 lat (msec) : 2=0.11%, 4=0.39%, 10=1.55%, 20=2.71%, 50=4.83% 00:24:48.401 lat (msec) : 100=16.63%, 250=69.27%, 500=4.50% 00:24:48.401 cpu : usr=1.62%, sys=1.50%, ctx=2186, majf=0, minf=1 00:24:48.401 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:24:48.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:48.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:48.401 issued rwts: total=0,4575,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:48.401 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:48.401 job10: (groupid=0, jobs=1): err= 0: pid=4113998: Wed May 15 01:53:11 2024 00:24:48.401 write: IOPS=540, BW=135MiB/s (142MB/s)(1367MiB/10115msec); 0 zone resets 00:24:48.401 slat (usec): min=20, max=109265, avg=1562.56, stdev=4272.71 00:24:48.401 clat (usec): min=1151, max=313062, avg=116745.84, stdev=69972.49 00:24:48.401 lat (usec): min=1221, max=316990, avg=118308.40, stdev=70913.62 00:24:48.401 clat percentiles (msec): 00:24:48.401 | 1.00th=[ 5], 5.00th=[ 25], 10.00th=[ 46], 20.00th=[ 50], 00:24:48.401 | 30.00th=[ 79], 40.00th=[ 86], 50.00th=[ 93], 60.00th=[ 118], 00:24:48.401 | 70.00th=[ 148], 80.00th=[ 184], 90.00th=[ 226], 95.00th=[ 255], 00:24:48.401 | 99.00th=[ 288], 99.50th=[ 305], 99.90th=[ 313], 99.95th=[ 313], 00:24:48.401 | 99.99th=[ 313] 00:24:48.401 bw ( KiB/s): min=57344, max=306688, per=9.69%, avg=138404.50, stdev=67907.37, samples=20 00:24:48.401 iops : min= 224, max= 1198, avg=540.60, stdev=265.28, samples=20 00:24:48.401 lat (msec) : 2=0.22%, 4=0.75%, 10=0.91%, 20=1.08%, 50=18.27% 00:24:48.401 lat (msec) : 100=32.16%, 250=40.88%, 500=5.72% 00:24:48.401 cpu : usr=1.69%, sys=1.78%, ctx=2290, majf=0, minf=1 00:24:48.401 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:48.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:48.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:48.401 issued rwts: total=0,5469,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:48.401 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:48.401 00:24:48.401 Run status group 0 (all jobs): 00:24:48.401 WRITE: bw=1395MiB/s (1462MB/s), 108MiB/s-187MiB/s (113MB/s-196MB/s), io=13.9GiB (14.9GB), run=10037-10183msec 00:24:48.401 00:24:48.401 Disk stats (read/write): 00:24:48.401 nvme0n1: ios=49/8625, merge=0/0, ticks=203/1211199, in_queue=1211402, util=98.19% 00:24:48.401 nvme10n1: ios=43/8761, merge=0/0, ticks=196/1240235, in_queue=1240431, 
util=98.73% 00:24:48.401 nvme1n1: ios=22/10014, merge=0/0, ticks=98/1239885, in_queue=1239983, util=97.97% 00:24:48.401 nvme2n1: ios=0/8964, merge=0/0, ticks=0/1218441, in_queue=1218441, util=97.43% 00:24:48.401 nvme3n1: ios=42/9169, merge=0/0, ticks=1129/1245159, in_queue=1246288, util=100.00% 00:24:48.401 nvme4n1: ios=0/11332, merge=0/0, ticks=0/1240348, in_queue=1240348, util=97.95% 00:24:48.401 nvme5n1: ios=44/9732, merge=0/0, ticks=1653/1223054, in_queue=1224707, util=100.00% 00:24:48.401 nvme6n1: ios=0/14728, merge=0/0, ticks=0/1212525, in_queue=1212525, util=98.29% 00:24:48.401 nvme7n1: ios=39/10775, merge=0/0, ticks=518/1216993, in_queue=1217511, util=99.84% 00:24:48.401 nvme8n1: ios=0/8930, merge=0/0, ticks=0/1212120, in_queue=1212120, util=98.94% 00:24:48.401 nvme9n1: ios=42/10714, merge=0/0, ticks=2658/1195909, in_queue=1198567, util=100.00% 00:24:48.401 01:53:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:24:48.401 01:53:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:24:48.401 01:53:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:48.401 01:53:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:48.401 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:48.401 01:53:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:24:48.401 01:53:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:24:48.401 01:53:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:24:48.401 01:53:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK1 00:24:48.401 01:53:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:24:48.401 01:53:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:24:48.401 01:53:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:24:48.401 01:53:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:48.401 01:53:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:48.401 01:53:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:48.401 01:53:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:48.401 01:53:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:48.401 01:53:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:24:48.401 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:24:48.401 01:53:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:24:48.401 01:53:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:24:48.401 01:53:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:24:48.401 01:53:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK2 00:24:48.401 01:53:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:24:48.401 01:53:12 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:24:48.401 01:53:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:24:48.401 01:53:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:48.401 01:53:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:48.401 01:53:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:48.401 01:53:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:48.401 01:53:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:48.401 01:53:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:24:48.659 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:24:48.659 01:53:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:24:48.659 01:53:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:24:48.659 01:53:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:24:48.659 01:53:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK3 00:24:48.659 01:53:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:24:48.659 01:53:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:24:48.659 01:53:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:24:48.659 01:53:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:24:48.659 01:53:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:48.659 01:53:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:48.659 01:53:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:48.659 01:53:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:48.659 01:53:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:24:48.917 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:24:48.917 01:53:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:24:48.917 01:53:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:24:48.917 01:53:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:24:48.917 01:53:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK4 00:24:48.917 01:53:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:24:48.917 01:53:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:24:48.917 01:53:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:24:48.917 01:53:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:24:48.917 01:53:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:48.917 01:53:12 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:24:48.917 01:53:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:48.917 01:53:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:48.917 01:53:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:24:49.175 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:24:49.175 01:53:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:24:49.175 01:53:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:24:49.175 01:53:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:24:49.175 01:53:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK5 00:24:49.175 01:53:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:24:49.175 01:53:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:24:49.175 01:53:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:24:49.175 01:53:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:24:49.175 01:53:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:49.175 01:53:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:49.175 01:53:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:49.175 01:53:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:49.176 01:53:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:24:49.176 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:24:49.176 01:53:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:24:49.176 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:24:49.176 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:24:49.176 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK6 00:24:49.176 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:24:49.176 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:24:49.176 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:24:49.176 01:53:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:24:49.176 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:49.176 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:49.176 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:49.176 01:53:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:49.176 01:53:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:24:49.432 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 
controller(s) 00:24:49.432 01:53:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:24:49.432 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:24:49.432 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:24:49.432 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK7 00:24:49.432 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:24:49.432 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:24:49.432 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:24:49.432 01:53:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:24:49.432 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:49.432 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:49.432 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:49.432 01:53:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:49.432 01:53:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:24:49.432 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:24:49.432 01:53:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:24:49.432 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:24:49.432 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:24:49.432 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK8 00:24:49.432 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:24:49.432 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:24:49.687 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:24:49.687 01:53:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:24:49.687 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:49.687 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:49.687 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:49.687 01:53:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:49.687 01:53:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:24:49.687 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:24:49.687 01:53:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:24:49.687 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:24:49.687 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:24:49.687 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK9 00:24:49.687 
01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:24:49.687 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:24:49.687 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:24:49.687 01:53:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:24:49.687 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:49.687 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:49.687 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:49.687 01:53:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:49.687 01:53:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:24:49.687 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:24:49.687 01:53:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:24:49.687 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:24:49.687 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:24:49.687 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK10 00:24:49.944 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:24:49.944 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:24:49.944 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:24:49.944 01:53:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:24:49.944 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:49.944 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:49.944 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:49.944 01:53:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:49.944 01:53:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:24:49.944 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:24:49.944 01:53:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:24:49.944 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:24:49.944 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:24:49.944 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK11 00:24:49.944 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:24:49.944 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:24:49.944 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:24:49.944 01:53:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 
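For reference, the per-subsystem teardown replayed in the trace above reduces to the loop below. This is a sketch reconstructed from the multiconnection.sh@37-@40 xtrace markers only; NVMF_SUBSYS (11 here, per "seq 1 11"), waitforserial_disconnect, and rpc_cmd are the test-suite variables and helpers already visible in the trace, not new definitions.

  sync
  for i in $(seq 1 $NVMF_SUBSYS); do
    # Drop the initiator-side controller for this subsystem.
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"
    # Wait until no block device with serial SPDK$i remains (lsblk + grep, as traced).
    waitforserial_disconnect "SPDK$i"
    # Remove the subsystem on the target side over the SPDK RPC socket.
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
  done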
00:24:49.944 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:49.944 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:49.944 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:49.944 01:53:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:24:49.944 01:53:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:24:49.944 01:53:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:24:49.944 01:53:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:49.944 01:53:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:24:49.944 01:53:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:49.944 01:53:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:24:49.944 01:53:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:49.944 01:53:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:49.944 rmmod nvme_tcp 00:24:49.944 rmmod nvme_fabrics 00:24:49.944 rmmod nvme_keyring 00:24:49.944 01:53:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:49.944 01:53:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:24:49.944 01:53:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:24:49.944 01:53:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 4108673 ']' 00:24:49.944 01:53:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 4108673 00:24:49.944 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@947 -- # '[' -z 4108673 ']' 00:24:49.944 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # kill -0 4108673 00:24:49.944 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # uname 00:24:49.944 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:24:49.944 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4108673 00:24:49.945 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:24:49.945 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:24:49.945 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4108673' 00:24:49.945 killing process with pid 4108673 00:24:49.945 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@966 -- # kill 4108673 00:24:49.945 [2024-05-15 01:53:13.796887] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:49.945 01:53:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@971 -- # wait 4108673 00:24:50.508 01:53:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:50.508 01:53:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:50.508 01:53:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:50.508 01:53:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:24:50.508 01:53:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:50.508 01:53:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.508 01:53:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:50.508 01:53:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.406 01:53:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:52.406 00:24:52.406 real 1m0.012s 00:24:52.406 user 3m18.689s 00:24:52.406 sys 0m24.758s 00:24:52.406 01:53:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # xtrace_disable 00:24:52.406 01:53:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.406 ************************************ 00:24:52.406 END TEST nvmf_multiconnection 00:24:52.406 ************************************ 00:24:52.406 01:53:16 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:24:52.664 01:53:16 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:24:52.664 01:53:16 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:24:52.664 01:53:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:52.664 ************************************ 00:24:52.664 START TEST nvmf_initiator_timeout 00:24:52.664 ************************************ 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:24:52.664 * Looking for test storage... 
00:24:52.664 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:52.664 01:53:16 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:24:52.664 01:53:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:55.192 
01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:55.192 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:55.192 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:55.192 Found net devices 
under 0000:09:00.0: cvl_0_0 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:55.192 Found net devices under 0000:09:00.1: cvl_0_1 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:55.192 01:53:18 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:55.192 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:55.192 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:24:55.192 00:24:55.192 --- 10.0.0.2 ping statistics --- 00:24:55.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.192 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:55.192 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:55.192 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:24:55.192 00:24:55.192 --- 10.0.0.1 ping statistics --- 00:24:55.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.192 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@721 -- # xtrace_disable 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=4117600 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 4117600 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@828 -- # '[' -z 4117600 ']' 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local max_retries=100 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:24:55.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@837 -- # xtrace_disable 00:24:55.192 01:53:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:55.192 [2024-05-15 01:53:19.012880] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:24:55.192 [2024-05-15 01:53:19.012976] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:55.192 EAL: No free 2048 kB hugepages reported on node 1 00:24:55.192 [2024-05-15 01:53:19.088711] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:55.450 [2024-05-15 01:53:19.174272] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:55.450 [2024-05-15 01:53:19.174340] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:55.450 [2024-05-15 01:53:19.174369] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:55.450 [2024-05-15 01:53:19.174381] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:55.450 [2024-05-15 01:53:19.174392] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:55.450 [2024-05-15 01:53:19.174457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:55.450 [2024-05-15 01:53:19.174548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:55.450 [2024-05-15 01:53:19.174597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:55.450 [2024-05-15 01:53:19.174599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:55.450 01:53:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:24:55.450 01:53:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@861 -- # return 0 00:24:55.450 01:53:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:55.450 01:53:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@727 -- # xtrace_disable 00:24:55.450 01:53:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:55.450 01:53:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:55.450 01:53:19 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:55.450 01:53:19 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:55.450 01:53:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:55.450 01:53:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:55.450 Malloc0 00:24:55.450 01:53:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:55.450 01:53:19 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:24:55.450 01:53:19 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:24:55.450 01:53:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:55.450 Delay0 00:24:55.450 01:53:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:55.450 01:53:19 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:55.450 01:53:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:55.450 01:53:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:55.450 [2024-05-15 01:53:19.354105] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:55.450 01:53:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:55.450 01:53:19 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:24:55.450 01:53:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:55.450 01:53:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:55.450 01:53:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:55.450 01:53:19 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:55.450 01:53:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:55.450 01:53:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:55.450 01:53:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:55.450 01:53:19 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:55.450 01:53:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:55.450 01:53:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:55.707 [2024-05-15 01:53:19.382127] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:55.707 [2024-05-15 01:53:19.382458] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:55.707 01:53:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:55.707 01:53:19 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:56.271 01:53:19 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:24:56.271 01:53:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1195 -- # local i=0 00:24:56.271 01:53:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:24:56.271 01:53:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:24:56.271 01:53:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # sleep 2 00:24:58.170 01:53:21 nvmf_tcp.nvmf_initiator_timeout 
-- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:24:58.170 01:53:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:24:58.170 01:53:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:24:58.170 01:53:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:24:58.170 01:53:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:24:58.170 01:53:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # return 0 00:24:58.170 01:53:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=4118025 00:24:58.170 01:53:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:24:58.170 01:53:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:24:58.170 [global] 00:24:58.170 thread=1 00:24:58.170 invalidate=1 00:24:58.170 rw=write 00:24:58.170 time_based=1 00:24:58.170 runtime=60 00:24:58.170 ioengine=libaio 00:24:58.170 direct=1 00:24:58.170 bs=4096 00:24:58.170 iodepth=1 00:24:58.170 norandommap=0 00:24:58.170 numjobs=1 00:24:58.170 00:24:58.170 verify_dump=1 00:24:58.170 verify_backlog=512 00:24:58.170 verify_state_save=0 00:24:58.170 do_verify=1 00:24:58.170 verify=crc32c-intel 00:24:58.170 [job0] 00:24:58.170 filename=/dev/nvme0n1 00:24:58.170 Could not set queue depth (nvme0n1) 00:24:58.476 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:58.476 fio-3.35 00:24:58.476 Starting 1 thread 00:25:01.750 01:53:24 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:25:01.750 01:53:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:01.750 01:53:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:01.750 true 00:25:01.750 01:53:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:01.750 01:53:24 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:25:01.750 01:53:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:01.750 01:53:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:01.750 true 00:25:01.750 01:53:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:01.750 01:53:24 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:25:01.750 01:53:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:01.750 01:53:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:01.750 true 00:25:01.750 01:53:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:01.750 01:53:24 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:25:01.750 01:53:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:01.751 01:53:24 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:25:01.751 true 00:25:01.751 01:53:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:01.751 01:53:24 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:25:04.273 01:53:27 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:25:04.273 01:53:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:04.273 01:53:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:04.273 true 00:25:04.273 01:53:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:04.273 01:53:28 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:25:04.273 01:53:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:04.273 01:53:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:04.273 true 00:25:04.274 01:53:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:04.274 01:53:28 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:25:04.274 01:53:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:04.274 01:53:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:04.274 true 00:25:04.274 01:53:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:04.274 01:53:28 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:25:04.274 01:53:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:04.274 01:53:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:04.274 true 00:25:04.274 01:53:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:04.274 01:53:28 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:25:04.274 01:53:28 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 4118025 00:26:00.466 00:26:00.466 job0: (groupid=0, jobs=1): err= 0: pid=4118094: Wed May 15 01:54:22 2024 00:26:00.466 read: IOPS=31, BW=127KiB/s (130kB/s)(7616KiB/60008msec) 00:26:00.466 slat (usec): min=6, max=15497, avg=31.90, stdev=480.32 00:26:00.466 clat (usec): min=241, max=41263k, avg=31174.59, stdev=945571.83 00:26:00.466 lat (usec): min=250, max=41263k, avg=31206.49, stdev=945571.48 00:26:00.466 clat percentiles (usec): 00:26:00.466 | 1.00th=[ 251], 5.00th=[ 265], 10.00th=[ 285], 00:26:00.466 | 20.00th=[ 330], 30.00th=[ 347], 40.00th=[ 363], 00:26:00.466 | 50.00th=[ 379], 60.00th=[ 396], 70.00th=[ 408], 00:26:00.466 | 80.00th=[ 41157], 90.00th=[ 41157], 95.00th=[ 41157], 00:26:00.466 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[ 44827], 00:26:00.466 | 99.95th=[17112761], 99.99th=[17112761] 00:26:00.466 write: IOPS=34, BW=137KiB/s (140kB/s)(8192KiB/60008msec); 0 zone resets 00:26:00.466 slat (usec): min=7, max=28979, avg=32.75, stdev=640.01 00:26:00.466 clat (usec): min=185, max=431, avg=245.78, stdev=30.07 00:26:00.466 lat (usec): min=199, max=29289, avg=278.53, stdev=642.32 00:26:00.466 clat percentiles (usec): 00:26:00.466 | 1.00th=[ 200], 5.00th=[ 210], 
10.00th=[ 217], 20.00th=[ 223], 00:26:00.466 | 30.00th=[ 229], 40.00th=[ 235], 50.00th=[ 243], 60.00th=[ 247], 00:26:00.466 | 70.00th=[ 255], 80.00th=[ 262], 90.00th=[ 281], 95.00th=[ 302], 00:26:00.466 | 99.00th=[ 359], 99.50th=[ 383], 99.90th=[ 416], 99.95th=[ 416], 00:26:00.466 | 99.99th=[ 433] 00:26:00.466 bw ( KiB/s): min= 4096, max= 8000, per=100.00%, avg=5461.33, stdev=2200.64, samples=3 00:26:00.466 iops : min= 1024, max= 2000, avg=1365.33, stdev=550.16, samples=3 00:26:00.466 lat (usec) : 250=33.48%, 500=55.52%, 750=0.03%, 1000=0.10% 00:26:00.466 lat (msec) : 2=0.05%, 50=10.80%, >=2000=0.03% 00:26:00.466 cpu : usr=0.09%, sys=0.15%, ctx=3959, majf=0, minf=2 00:26:00.466 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:00.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.466 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.466 issued rwts: total=1904,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.466 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:00.466 00:26:00.466 Run status group 0 (all jobs): 00:26:00.466 READ: bw=127KiB/s (130kB/s), 127KiB/s-127KiB/s (130kB/s-130kB/s), io=7616KiB (7799kB), run=60008-60008msec 00:26:00.466 WRITE: bw=137KiB/s (140kB/s), 137KiB/s-137KiB/s (140kB/s-140kB/s), io=8192KiB (8389kB), run=60008-60008msec 00:26:00.466 00:26:00.466 Disk stats (read/write): 00:26:00.466 nvme0n1: ios=1953/2048, merge=0/0, ticks=19455/473, in_queue=19928, util=99.84% 00:26:00.466 01:54:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:00.466 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:00.466 01:54:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:00.466 01:54:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # local i=0 00:26:00.466 01:54:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:26:00.466 01:54:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:00.466 01:54:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:26:00.466 01:54:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:00.466 01:54:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1228 -- # return 0 00:26:00.466 01:54:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:26:00.466 01:54:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:26:00.466 nvmf hotplug test: fio successful as expected 00:26:00.466 01:54:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:00.466 01:54:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:00.466 01:54:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:00.466 01:54:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:00.466 01:54:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:26:00.466 01:54:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # 
trap - SIGINT SIGTERM EXIT 00:26:00.466 01:54:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:26:00.466 01:54:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:00.466 01:54:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:26:00.466 01:54:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:00.466 01:54:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:26:00.466 01:54:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:00.466 01:54:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:00.466 rmmod nvme_tcp 00:26:00.466 rmmod nvme_fabrics 00:26:00.466 rmmod nvme_keyring 00:26:00.466 01:54:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:00.466 01:54:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:26:00.466 01:54:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:26:00.466 01:54:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 4117600 ']' 00:26:00.466 01:54:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 4117600 00:26:00.466 01:54:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@947 -- # '[' -z 4117600 ']' 00:26:00.466 01:54:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # kill -0 4117600 00:26:00.466 01:54:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # uname 00:26:00.466 01:54:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:26:00.466 01:54:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4117600 00:26:00.466 01:54:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:26:00.466 01:54:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:26:00.466 01:54:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4117600' 00:26:00.466 killing process with pid 4117600 00:26:00.466 01:54:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # kill 4117600 00:26:00.466 [2024-05-15 01:54:22.518668] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:00.466 01:54:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@971 -- # wait 4117600 00:26:00.466 01:54:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:00.466 01:54:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:00.466 01:54:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:00.466 01:54:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:00.466 01:54:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:00.466 01:54:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:00.466 01:54:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:00.466 01:54:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:26:01.034 01:54:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:01.034 00:26:01.034 real 1m8.444s 00:26:01.034 user 4m9.190s 00:26:01.034 sys 0m8.055s 00:26:01.034 01:54:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # xtrace_disable 00:26:01.034 01:54:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:01.034 ************************************ 00:26:01.034 END TEST nvmf_initiator_timeout 00:26:01.034 ************************************ 00:26:01.034 01:54:24 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:26:01.034 01:54:24 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:26:01.034 01:54:24 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:26:01.034 01:54:24 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:26:01.034 01:54:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:03.562 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:03.562 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:03.562 01:54:27 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:03.563 01:54:27 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:03.563 01:54:27 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:03.563 01:54:27 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:03.563 01:54:27 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:03.563 01:54:27 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:03.563 01:54:27 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:03.563 01:54:27 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:03.563 01:54:27 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:03.563 Found net devices under 0000:09:00.0: cvl_0_0 00:26:03.563 01:54:27 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:03.563 01:54:27 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:03.563 01:54:27 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:03.563 01:54:27 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:03.563 01:54:27 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:03.563 01:54:27 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:03.563 01:54:27 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:03.563 01:54:27 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:03.563 01:54:27 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:03.563 Found net devices under 0000:09:00.1: cvl_0_1 00:26:03.563 01:54:27 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:03.563 01:54:27 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:03.563 01:54:27 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:03.563 01:54:27 nvmf_tcp -- 
nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:26:03.563 01:54:27 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:03.563 01:54:27 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:26:03.563 01:54:27 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:26:03.563 01:54:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:03.563 ************************************ 00:26:03.563 START TEST nvmf_perf_adq 00:26:03.563 ************************************ 00:26:03.563 01:54:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:03.563 * Looking for test storage... 00:26:03.563 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:03.563 01:54:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:03.563 01:54:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:26:03.563 01:54:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:03.563 01:54:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:03.563 01:54:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:03.563 01:54:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:03.563 01:54:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:03.563 01:54:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:03.563 01:54:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:03.563 01:54:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:03.563 01:54:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:03.563 01:54:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:03.821 01:54:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:03.821 01:54:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:03.821 01:54:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:03.821 01:54:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:03.821 01:54:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:03.821 01:54:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:03.821 01:54:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:03.821 01:54:27 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:03.821 01:54:27 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:03.821 01:54:27 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:03.821 01:54:27 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.821 01:54:27 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.821 01:54:27 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.821 01:54:27 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:26:03.822 01:54:27 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.822 01:54:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:26:03.822 01:54:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:03.822 01:54:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:03.822 01:54:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:03.822 01:54:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:03.822 01:54:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:03.822 01:54:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:03.822 01:54:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:03.822 01:54:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:03.822 01:54:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:26:03.822 01:54:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:03.822 01:54:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:06.353 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
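The discovery pass being replayed here is nvmf/common.sh bucketing NICs by PCI vendor:device ID into the e810/x722/mlx arrays, promoting the bucket that matches the requested family to pci_devs, and resolving each port's kernel interface name through sysfs. A minimal stand-alone sketch of the same idea follows — the 0x8086:0x159b match and the "Found net devices under ..." message are taken from the trace, while the sysfs walk is illustrative rather than the script's actual pci_bus_cache mechanism:

    # Find Intel E810 ports (vendor 0x8086, device 0x159b, as matched above)
    # and resolve the net interface each one exposes under sysfs.
    pci_devs=()
    for dev in /sys/bus/pci/devices/*; do
        [[ $(<"$dev/vendor") == 0x8086 && $(<"$dev/device") == 0x159b ]] || continue
        pci_devs+=("${dev##*/}")
    done
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        for nd in "/sys/bus/pci/devices/$pci/net/"*; do   # e.g. .../net/cvl_0_0
            [[ -e $nd ]] || continue                      # glob may not match
            echo "Found net devices under $pci: ${nd##*/}"
            net_devs+=("${nd##*/}")
        done
    done

With two matching ports the run proceeds; an empty list would trip the (( ... == 0 )) guards visible in the trace and abort before any target setup.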
00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:06.353 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:06.353 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:06.354 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:06.354 Found net devices under 0000:09:00.0: cvl_0_0 00:26:06.354 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:06.354 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:06.354 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:06.354 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:06.354 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:06.354 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:06.354 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:06.354 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:06.354 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:06.354 Found net devices under 0000:09:00.1: cvl_0_1 00:26:06.354 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:06.354 01:54:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:06.354 01:54:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:06.354 01:54:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 
-- # (( 2 == 0 )) 00:26:06.354 01:54:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:06.354 01:54:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:26:06.354 01:54:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:26:06.612 01:54:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:26:08.509 01:54:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:13.856 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:13.856 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:13.856 Found net devices under 0000:09:00.0: cvl_0_0 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:13.856 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:13.857 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:13.857 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:13.857 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:13.857 Found net devices under 0000:09:00.1: cvl_0_1 00:26:13.857 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:13.857 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:13.857 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:26:13.857 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:13.857 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:13.857 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:13.857 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:13.857 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:13.857 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:13.857 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:13.857 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:13.857 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:13.857 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:13.857 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:13.857 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:13.857 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:13.857 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:13.857 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:13.857 01:54:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:13.857 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:13.857 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:26:13.857 00:26:13.857 --- 10.0.0.2 ping statistics --- 00:26:13.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.857 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:13.857 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:13.857 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:26:13.857 00:26:13.857 --- 10.0.0.1 ping statistics --- 00:26:13.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.857 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@721 -- # xtrace_disable 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=4130811 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 4130811 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@828 -- # '[' -z 4130811 ']' 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local max_retries=100 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
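nvmf_tcp_init, traced just above, splits the two ports of one physical NIC so a single host can act as both target and initiator: one port moves into a private network namespace for the target, the other stays in the root namespace for the initiator, and each side then pings the other across the wire. Condensed from the commands in the trace (interface, namespace, and address values exactly as logged; the iptables rule opens the NVMe/TCP port on the initiator-facing interface):

    TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"                 # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev "$INI_IF"             # initiator address
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"   # target address
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # root ns -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1            # target ns -> initiator

The target application is then launched inside that namespace (note the ip netns exec cvl_0_0_ns_spdk prefix on the nvmf_tgt command line), and waitforlisten polls until the process answers on /var/tmp/spdk.sock before any RPCs are issued.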
00:26:13.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@837 -- # xtrace_disable 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:13.857 [2024-05-15 01:54:37.139855] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:26:13.857 [2024-05-15 01:54:37.139935] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:13.857 EAL: No free 2048 kB hugepages reported on node 1 00:26:13.857 [2024-05-15 01:54:37.216772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:13.857 [2024-05-15 01:54:37.302944] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:13.857 [2024-05-15 01:54:37.303018] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:13.857 [2024-05-15 01:54:37.303033] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:13.857 [2024-05-15 01:54:37.303058] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:13.857 [2024-05-15 01:54:37.303069] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:13.857 [2024-05-15 01:54:37.303151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:13.857 [2024-05-15 01:54:37.303226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:13.857 [2024-05-15 01:54:37.303265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:13.857 [2024-05-15 01:54:37.303268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@861 -- # return 0 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@727 -- # xtrace_disable 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:13.857 [2024-05-15 01:54:37.519724] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:13.857 Malloc1 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:13.857 [2024-05-15 01:54:37.571274] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:13.857 [2024-05-15 01:54:37.571565] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:13.857 01:54:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=4130955 00:26:13.858 01:54:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:13.858 
01:54:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:26:13.858 EAL: No free 2048 kB hugepages reported on node 1 00:26:15.756 01:54:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:26:15.756 01:54:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:15.756 01:54:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:15.756 01:54:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:15.756 01:54:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:26:15.756 "tick_rate": 2700000000, 00:26:15.756 "poll_groups": [ 00:26:15.756 { 00:26:15.756 "name": "nvmf_tgt_poll_group_000", 00:26:15.756 "admin_qpairs": 1, 00:26:15.756 "io_qpairs": 1, 00:26:15.756 "current_admin_qpairs": 1, 00:26:15.756 "current_io_qpairs": 1, 00:26:15.756 "pending_bdev_io": 0, 00:26:15.756 "completed_nvme_io": 20264, 00:26:15.756 "transports": [ 00:26:15.756 { 00:26:15.756 "trtype": "TCP" 00:26:15.756 } 00:26:15.756 ] 00:26:15.756 }, 00:26:15.756 { 00:26:15.756 "name": "nvmf_tgt_poll_group_001", 00:26:15.756 "admin_qpairs": 0, 00:26:15.756 "io_qpairs": 1, 00:26:15.756 "current_admin_qpairs": 0, 00:26:15.756 "current_io_qpairs": 1, 00:26:15.756 "pending_bdev_io": 0, 00:26:15.756 "completed_nvme_io": 20660, 00:26:15.756 "transports": [ 00:26:15.756 { 00:26:15.756 "trtype": "TCP" 00:26:15.756 } 00:26:15.756 ] 00:26:15.756 }, 00:26:15.756 { 00:26:15.756 "name": "nvmf_tgt_poll_group_002", 00:26:15.756 "admin_qpairs": 0, 00:26:15.756 "io_qpairs": 1, 00:26:15.756 "current_admin_qpairs": 0, 00:26:15.756 "current_io_qpairs": 1, 00:26:15.756 "pending_bdev_io": 0, 00:26:15.756 "completed_nvme_io": 21266, 00:26:15.756 "transports": [ 00:26:15.756 { 00:26:15.756 "trtype": "TCP" 00:26:15.756 } 00:26:15.756 ] 00:26:15.756 }, 00:26:15.756 { 00:26:15.756 "name": "nvmf_tgt_poll_group_003", 00:26:15.756 "admin_qpairs": 0, 00:26:15.756 "io_qpairs": 1, 00:26:15.756 "current_admin_qpairs": 0, 00:26:15.756 "current_io_qpairs": 1, 00:26:15.756 "pending_bdev_io": 0, 00:26:15.756 "completed_nvme_io": 19986, 00:26:15.756 "transports": [ 00:26:15.756 { 00:26:15.756 "trtype": "TCP" 00:26:15.756 } 00:26:15.756 ] 00:26:15.756 } 00:26:15.756 ] 00:26:15.756 }' 00:26:15.756 01:54:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:26:15.756 01:54:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:26:15.756 01:54:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:26:15.756 01:54:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:26:15.756 01:54:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 4130955 00:26:23.856 Initializing NVMe Controllers 00:26:23.856 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:23.856 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:23.856 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:23.856 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:23.856 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:23.856 Initialization complete. Launching workers. 
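Worth pausing on what was just asserted: perf_adq.sh@77-79 pull nvmf_get_stats and require that the four I/O queue pairs opened by spdk_nvme_perf landed one per poll group. With --enable-placement-id 0 this is the ordinary round-robin spread across the 0xF core mask, and it is the baseline the ADQ run later in the log is compared against. A minimal sketch of the check, assuming rpc_cmd is the harness wrapper around scripts/rpc.py:

  # emit one line per poll group that currently owns exactly one I/O qpair
  count=$(rpc_cmd nvmf_get_stats \
            | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
            | wc -l)
  [[ $count -ne 4 ]] && exit 1   # baseline: every reactor got a connection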
00:26:23.856 ======================================================== 00:26:23.857 Latency(us) 00:26:23.857 Device Information : IOPS MiB/s Average min max 00:26:23.857 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10270.29 40.12 6232.36 2595.45 9225.89 00:26:23.857 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10613.29 41.46 6030.79 3398.82 7832.96 00:26:23.857 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10940.99 42.74 5849.74 3767.35 7443.63 00:26:23.857 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10432.29 40.75 6134.47 2530.24 10092.77 00:26:23.857 ======================================================== 00:26:23.857 Total : 42256.88 165.07 6058.50 2530.24 10092.77 00:26:23.857 00:26:23.857 01:54:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:26:23.857 01:54:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:23.857 01:54:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:26:23.857 01:54:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:23.857 01:54:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:26:23.857 01:54:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:23.857 01:54:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:23.857 rmmod nvme_tcp 00:26:23.857 rmmod nvme_fabrics 00:26:23.857 rmmod nvme_keyring 00:26:23.857 01:54:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:23.857 01:54:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:26:23.857 01:54:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:26:23.857 01:54:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 4130811 ']' 00:26:23.857 01:54:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 4130811 00:26:23.857 01:54:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@947 -- # '[' -z 4130811 ']' 00:26:23.857 01:54:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # kill -0 4130811 00:26:23.857 01:54:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # uname 00:26:23.857 01:54:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:26:23.857 01:54:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4130811 00:26:23.857 01:54:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:26:23.857 01:54:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:26:23.857 01:54:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4130811' 00:26:23.857 killing process with pid 4130811 00:26:23.857 01:54:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # kill 4130811 00:26:23.857 [2024-05-15 01:54:47.740257] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:23.857 01:54:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@971 -- # wait 4130811 00:26:24.114 01:54:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:24.114 01:54:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:24.114 01:54:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:24.114 01:54:47 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:24.114 01:54:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:24.115 01:54:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:24.115 01:54:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:24.115 01:54:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:26.645 01:54:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:26.645 01:54:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:26:26.645 01:54:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:26:26.903 01:54:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:26:28.275 01:54:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:26:33.546 01:54:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:26:33.546 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:33.546 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:33.546 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:33.546 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:33.546 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:33.546 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.546 01:54:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:33.546 01:54:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.546 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:33.546 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:33.546 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:33.546 01:54:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:33.546 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:33.546 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:33.546 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:33.546 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:33.546 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:33.546 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:33.546 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:33.546 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:33.546 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:33.546 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:33.546 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:33.546 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:33.547 
01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:33.547 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:33.547 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == 
rdma ]] 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:33.547 Found net devices under 0000:09:00.0: cvl_0_0 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:33.547 Found net devices under 0000:09:00.1: cvl_0_1 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush 
cvl_0_1 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:33.547 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:33.547 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:33.548 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:26:33.548 00:26:33.548 --- 10.0.0.2 ping statistics --- 00:26:33.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:33.548 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:26:33.548 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:33.548 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:33.548 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:26:33.548 00:26:33.548 --- 10.0.0.1 ping statistics --- 00:26:33.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:33.548 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:26:33.548 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:33.548 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:26:33.548 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:33.548 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:33.548 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:33.548 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:33.548 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:33.548 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:33.548 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:33.548 01:54:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:26:33.548 01:54:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:26:33.548 01:54:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:26:33.548 01:54:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:26:33.548 net.core.busy_poll = 1 00:26:33.548 01:54:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:26:33.548 net.core.busy_read = 1 00:26:33.548 01:54:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:26:33.548 01:54:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec 
cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:26:33.548 01:54:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:26:33.548 01:54:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:26:33.548 01:54:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:26:33.548 01:54:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:33.548 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:33.548 01:54:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@721 -- # xtrace_disable 00:26:33.548 01:54:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:33.548 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=4133444 00:26:33.548 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:33.548 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 4133444 00:26:33.548 01:54:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@828 -- # '[' -z 4133444 ']' 00:26:33.548 01:54:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:33.548 01:54:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local max_retries=100 00:26:33.548 01:54:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:33.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:33.548 01:54:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@837 -- # xtrace_disable 00:26:33.548 01:54:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:33.806 [2024-05-15 01:54:57.484674] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:26:33.806 [2024-05-15 01:54:57.484746] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:33.806 EAL: No free 2048 kB hugepages reported on node 1 00:26:33.806 [2024-05-15 01:54:57.556702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:33.806 [2024-05-15 01:54:57.637943] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:33.806 [2024-05-15 01:54:57.638006] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:33.806 [2024-05-15 01:54:57.638035] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:33.806 [2024-05-15 01:54:57.638046] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:33.806 [2024-05-15 01:54:57.638057] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
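The adq_configure_driver trace above is the heart of this second pass; condensed, it is the following (every command runs inside the target namespace, and 2@0/2@2 is the queue split this job used: two default queues, two ADQ queues):

  # let the ice driver offload traffic classes to hardware channels
  ethtool --offload cvl_0_0 hw-tc-offload on
  ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  # busy-poll so application threads drive their own queues
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1
  # TC0 (queues 0-1) for default traffic, TC1 (queues 2-3) for NVMe/TCP
  tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  tc qdisc add dev cvl_0_0 ingress
  # steer flows aimed at 10.0.0.2:4420 into TC1, in hardware only (skip_sw)
  tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
      dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
  # pin transmit queues to the matching receive queues
  scripts/perf/nvmf/set_xps_rxqs cvl_0_0

The target pairs this with --enable-placement-id 1 on the posix sock impl and --sock-priority 1 on the TCP transport, as the RPCs below show; that is what lets poll groups follow the hardware queues.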
00:26:33.806 [2024-05-15 01:54:57.638153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:33.806 [2024-05-15 01:54:57.638183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:33.806 [2024-05-15 01:54:57.638244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:33.806 [2024-05-15 01:54:57.638247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:33.806 01:54:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:26:33.806 01:54:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@861 -- # return 0 00:26:33.806 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:33.806 01:54:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@727 -- # xtrace_disable 00:26:33.806 01:54:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:33.806 01:54:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:33.806 01:54:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:26:33.806 01:54:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:33.806 01:54:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:33.806 01:54:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:33.806 01:54:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:33.806 01:54:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.064 01:54:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:34.064 01:54:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:26:34.064 01:54:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.064 01:54:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:34.064 01:54:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.064 01:54:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:34.064 01:54:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.064 01:54:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:34.064 01:54:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.064 01:54:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:26:34.064 01:54:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.064 01:54:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:34.064 [2024-05-15 01:54:57.863048] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:34.064 01:54:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.064 01:54:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:34.064 01:54:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.064 01:54:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:34.064 Malloc1 00:26:34.064 01:54:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.064 01:54:57 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:34.064 01:54:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.064 01:54:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:34.064 01:54:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.064 01:54:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:34.064 01:54:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.064 01:54:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:34.064 01:54:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.065 01:54:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:34.065 01:54:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.065 01:54:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:34.065 [2024-05-15 01:54:57.915998] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:34.065 [2024-05-15 01:54:57.916326] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:34.065 01:54:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.065 01:54:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=4133480 00:26:34.065 01:54:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:26:34.065 01:54:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:34.065 EAL: No free 2048 kB hugepages reported on node 1 00:26:36.591 01:54:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:26:36.591 01:54:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:36.591 01:54:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:36.591 01:54:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:36.591 01:54:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:26:36.591 "tick_rate": 2700000000, 00:26:36.591 "poll_groups": [ 00:26:36.591 { 00:26:36.591 "name": "nvmf_tgt_poll_group_000", 00:26:36.591 "admin_qpairs": 1, 00:26:36.591 "io_qpairs": 4, 00:26:36.591 "current_admin_qpairs": 1, 00:26:36.591 "current_io_qpairs": 4, 00:26:36.591 "pending_bdev_io": 0, 00:26:36.591 "completed_nvme_io": 33863, 00:26:36.591 "transports": [ 00:26:36.591 { 00:26:36.591 "trtype": "TCP" 00:26:36.591 } 00:26:36.591 ] 00:26:36.591 }, 00:26:36.591 { 00:26:36.591 "name": "nvmf_tgt_poll_group_001", 00:26:36.591 "admin_qpairs": 0, 00:26:36.591 "io_qpairs": 0, 00:26:36.591 "current_admin_qpairs": 0, 00:26:36.591 "current_io_qpairs": 0, 00:26:36.591 "pending_bdev_io": 0, 00:26:36.591 "completed_nvme_io": 0, 00:26:36.591 "transports": [ 00:26:36.591 { 00:26:36.591 "trtype": "TCP" 00:26:36.591 } 00:26:36.591 ] 00:26:36.591 }, 00:26:36.591 { 00:26:36.591 "name": 
"nvmf_tgt_poll_group_002", 00:26:36.591 "admin_qpairs": 0, 00:26:36.591 "io_qpairs": 0, 00:26:36.591 "current_admin_qpairs": 0, 00:26:36.591 "current_io_qpairs": 0, 00:26:36.591 "pending_bdev_io": 0, 00:26:36.591 "completed_nvme_io": 0, 00:26:36.591 "transports": [ 00:26:36.591 { 00:26:36.591 "trtype": "TCP" 00:26:36.591 } 00:26:36.591 ] 00:26:36.591 }, 00:26:36.591 { 00:26:36.591 "name": "nvmf_tgt_poll_group_003", 00:26:36.591 "admin_qpairs": 0, 00:26:36.591 "io_qpairs": 0, 00:26:36.591 "current_admin_qpairs": 0, 00:26:36.591 "current_io_qpairs": 0, 00:26:36.591 "pending_bdev_io": 0, 00:26:36.591 "completed_nvme_io": 0, 00:26:36.591 "transports": [ 00:26:36.591 { 00:26:36.591 "trtype": "TCP" 00:26:36.591 } 00:26:36.591 ] 00:26:36.591 } 00:26:36.591 ] 00:26:36.591 }' 00:26:36.591 01:54:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:26:36.591 01:54:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:26:36.591 01:54:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=3 00:26:36.591 01:54:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 3 -lt 2 ]] 00:26:36.591 01:54:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 4133480 00:26:44.694 Initializing NVMe Controllers 00:26:44.694 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:44.694 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:44.694 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:44.694 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:44.694 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:44.694 Initialization complete. Launching workers. 
00:26:44.694 ======================================================== 00:26:44.694 Latency(us) 00:26:44.694 Device Information : IOPS MiB/s Average min max 00:26:44.694 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4752.70 18.57 13512.25 1653.26 63218.37 00:26:44.694 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4466.70 17.45 14333.17 1853.05 61890.63 00:26:44.694 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 3997.70 15.62 16010.04 1883.43 63157.42 00:26:44.694 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4581.40 17.90 13974.69 2528.40 61853.50 00:26:44.694 ======================================================== 00:26:44.694 Total : 17798.50 69.53 14398.32 1653.26 63218.37 00:26:44.694 00:26:44.694 01:55:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:26:44.694 01:55:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:44.694 01:55:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:26:44.694 01:55:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:44.694 01:55:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:26:44.694 01:55:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:44.694 01:55:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:44.694 rmmod nvme_tcp 00:26:44.694 rmmod nvme_fabrics 00:26:44.694 rmmod nvme_keyring 00:26:44.694 01:55:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:44.694 01:55:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:26:44.694 01:55:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:26:44.694 01:55:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 4133444 ']' 00:26:44.694 01:55:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 4133444 00:26:44.694 01:55:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@947 -- # '[' -z 4133444 ']' 00:26:44.694 01:55:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # kill -0 4133444 00:26:44.694 01:55:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # uname 00:26:44.694 01:55:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:26:44.694 01:55:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4133444 00:26:44.694 01:55:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:26:44.694 01:55:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:26:44.694 01:55:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4133444' 00:26:44.694 killing process with pid 4133444 00:26:44.694 01:55:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # kill 4133444 00:26:44.694 [2024-05-15 01:55:08.185341] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:44.694 01:55:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@971 -- # wait 4133444 00:26:44.694 01:55:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:44.694 01:55:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:44.694 01:55:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:44.694 
01:55:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:44.694 01:55:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:44.694 01:55:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:44.694 01:55:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:44.694 01:55:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:48.019 01:55:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:48.019 01:55:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:26:48.019 00:26:48.019 real 0m44.053s 00:26:48.019 user 2m39.465s 00:26:48.019 sys 0m9.213s 00:26:48.019 01:55:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # xtrace_disable 00:26:48.019 01:55:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:48.019 ************************************ 00:26:48.019 END TEST nvmf_perf_adq 00:26:48.019 ************************************ 00:26:48.019 01:55:11 nvmf_tcp -- nvmf/nvmf.sh@82 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:48.019 01:55:11 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:26:48.019 01:55:11 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:26:48.019 01:55:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:48.019 ************************************ 00:26:48.019 START TEST nvmf_shutdown 00:26:48.019 ************************************ 00:26:48.019 01:55:11 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:48.019 * Looking for test storage... 
00:26:48.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:48.019 01:55:11 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:48.019 01:55:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:26:48.019 01:55:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:48.019 01:55:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:48.019 01:55:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:48.019 01:55:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:48.019 01:55:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1104 -- # xtrace_disable 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:48.020 ************************************ 00:26:48.020 START TEST nvmf_shutdown_tc1 00:26:48.020 ************************************ 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # nvmf_shutdown_tc1 00:26:48.020 01:55:11 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:48.020 01:55:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:50.550 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:50.550 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:50.550 01:55:13 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:50.550 Found net devices under 0000:09:00.0: cvl_0_0 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:50.550 Found net devices under 0000:09:00.1: cvl_0_1 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:50.550 01:55:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:50.550 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:50.550 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:50.550 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:50.550 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:50.550 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:50.550 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:50.550 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:50.550 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:50.550 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:26:50.550 00:26:50.550 --- 10.0.0.2 ping statistics --- 00:26:50.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.550 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:26:50.550 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:50.550 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:50.550 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:26:50.550 00:26:50.550 --- 10.0.0.1 ping statistics --- 00:26:50.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.550 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:26:50.550 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:50.550 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:26:50.550 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:50.550 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:50.550 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:50.551 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:50.551 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:50.551 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:50.551 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:50.551 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:26:50.551 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:50.551 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@721 -- # xtrace_disable 00:26:50.551 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:50.551 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=4137162 00:26:50.551 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:50.551 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 4137162 00:26:50.551 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@828 -- # '[' -z 4137162 ']' 00:26:50.551 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:50.551 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local max_retries=100 00:26:50.551 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:50.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:50.551 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # xtrace_disable 00:26:50.551 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:50.551 [2024-05-15 01:55:14.187443] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
00:26:50.551 [2024-05-15 01:55:14.187535] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:50.551 EAL: No free 2048 kB hugepages reported on node 1 00:26:50.551 [2024-05-15 01:55:14.262811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:50.551 [2024-05-15 01:55:14.347177] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:50.551 [2024-05-15 01:55:14.347265] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:50.551 [2024-05-15 01:55:14.347290] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:50.551 [2024-05-15 01:55:14.347301] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:50.551 [2024-05-15 01:55:14.347311] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:50.551 [2024-05-15 01:55:14.347449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:50.551 [2024-05-15 01:55:14.347500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:50.551 [2024-05-15 01:55:14.347548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:26:50.551 [2024-05-15 01:55:14.347554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:50.551 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:26:50.551 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@861 -- # return 0 00:26:50.551 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:50.551 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@727 -- # xtrace_disable 00:26:50.551 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:50.810 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:50.810 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:50.810 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:50.810 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:50.810 [2024-05-15 01:55:14.489792] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:50.810 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:50.810 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:50.810 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:50.810 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@721 -- # xtrace_disable 00:26:50.810 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:50.810 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:50.810 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:50.810 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:50.810 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:50.810 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:50.810 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:50.810 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:50.810 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:50.810 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:50.810 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:50.810 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:50.810 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:50.810 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:50.810 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:50.810 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:50.810 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:50.810 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:50.810 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:50.810 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:50.810 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:50.810 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:50.810 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:26:50.810 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:50.810 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:50.810 Malloc1 00:26:50.810 [2024-05-15 01:55:14.568869] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:50.810 [2024-05-15 01:55:14.569172] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:50.810 Malloc2 00:26:50.810 Malloc3 00:26:50.810 Malloc4 00:26:50.810 Malloc5 00:26:51.068 Malloc6 00:26:51.068 Malloc7 00:26:51.068 Malloc8 00:26:51.068 Malloc9 00:26:51.068 Malloc10 00:26:51.068 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:51.068 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:26:51.068 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@727 -- # xtrace_disable 00:26:51.068 01:55:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:51.327 01:55:15 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=4137229 00:26:51.327 01:55:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 4137229 /var/tmp/bdevperf.sock 00:26:51.327 01:55:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@828 -- # '[' -z 4137229 ']' 00:26:51.327 01:55:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:51.327 01:55:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:51.327 01:55:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:26:51.327 01:55:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local max_retries=100 00:26:51.327 01:55:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:26:51.327 01:55:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:51.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:51.327 01:55:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:26:51.327 01:55:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # xtrace_disable 00:26:51.327 01:55:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:51.327 01:55:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:51.327 01:55:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:51.327 { 00:26:51.327 "params": { 00:26:51.327 "name": "Nvme$subsystem", 00:26:51.327 "trtype": "$TEST_TRANSPORT", 00:26:51.327 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:51.327 "adrfam": "ipv4", 00:26:51.327 "trsvcid": "$NVMF_PORT", 00:26:51.327 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:51.327 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:51.327 "hdgst": ${hdgst:-false}, 00:26:51.327 "ddgst": ${ddgst:-false} 00:26:51.327 }, 00:26:51.327 "method": "bdev_nvme_attach_controller" 00:26:51.327 } 00:26:51.327 EOF 00:26:51.327 )") 00:26:51.327 01:55:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:51.327 01:55:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:51.327 01:55:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:51.327 { 00:26:51.327 "params": { 00:26:51.327 "name": "Nvme$subsystem", 00:26:51.327 "trtype": "$TEST_TRANSPORT", 00:26:51.327 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:51.327 "adrfam": "ipv4", 00:26:51.327 "trsvcid": "$NVMF_PORT", 00:26:51.327 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:51.327 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:51.327 "hdgst": ${hdgst:-false}, 00:26:51.327 "ddgst": ${ddgst:-false} 00:26:51.327 }, 00:26:51.327 "method": "bdev_nvme_attach_controller" 00:26:51.327 } 00:26:51.327 EOF 00:26:51.327 )") 00:26:51.327 01:55:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:51.327 01:55:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 
-- # for subsystem in "${@:-1}" 00:26:51.327 01:55:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:51.327 { 00:26:51.327 "params": { 00:26:51.327 "name": "Nvme$subsystem", 00:26:51.327 "trtype": "$TEST_TRANSPORT", 00:26:51.327 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:51.327 "adrfam": "ipv4", 00:26:51.327 "trsvcid": "$NVMF_PORT", 00:26:51.327 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:51.327 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:51.327 "hdgst": ${hdgst:-false}, 00:26:51.327 "ddgst": ${ddgst:-false} 00:26:51.327 }, 00:26:51.327 "method": "bdev_nvme_attach_controller" 00:26:51.327 } 00:26:51.327 EOF 00:26:51.327 )") 00:26:51.327 01:55:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:51.327 01:55:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:51.327 01:55:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:51.327 { 00:26:51.327 "params": { 00:26:51.327 "name": "Nvme$subsystem", 00:26:51.327 "trtype": "$TEST_TRANSPORT", 00:26:51.327 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:51.327 "adrfam": "ipv4", 00:26:51.327 "trsvcid": "$NVMF_PORT", 00:26:51.327 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:51.327 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:51.327 "hdgst": ${hdgst:-false}, 00:26:51.327 "ddgst": ${ddgst:-false} 00:26:51.327 }, 00:26:51.327 "method": "bdev_nvme_attach_controller" 00:26:51.327 } 00:26:51.327 EOF 00:26:51.327 )") 00:26:51.327 01:55:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:51.327 01:55:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:51.328 01:55:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:51.328 { 00:26:51.328 "params": { 00:26:51.328 "name": "Nvme$subsystem", 00:26:51.328 "trtype": "$TEST_TRANSPORT", 00:26:51.328 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:51.328 "adrfam": "ipv4", 00:26:51.328 "trsvcid": "$NVMF_PORT", 00:26:51.328 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:51.328 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:51.328 "hdgst": ${hdgst:-false}, 00:26:51.328 "ddgst": ${ddgst:-false} 00:26:51.328 }, 00:26:51.328 "method": "bdev_nvme_attach_controller" 00:26:51.328 } 00:26:51.328 EOF 00:26:51.328 )") 00:26:51.328 01:55:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:51.328 01:55:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:51.328 01:55:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:51.328 { 00:26:51.328 "params": { 00:26:51.328 "name": "Nvme$subsystem", 00:26:51.328 "trtype": "$TEST_TRANSPORT", 00:26:51.328 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:51.328 "adrfam": "ipv4", 00:26:51.328 "trsvcid": "$NVMF_PORT", 00:26:51.328 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:51.328 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:51.328 "hdgst": ${hdgst:-false}, 00:26:51.328 "ddgst": ${ddgst:-false} 00:26:51.328 }, 00:26:51.328 "method": "bdev_nvme_attach_controller" 00:26:51.328 } 00:26:51.328 EOF 00:26:51.328 )") 00:26:51.328 01:55:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:51.328 01:55:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:26:51.328 01:55:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:51.328 { 00:26:51.328 "params": { 00:26:51.328 "name": "Nvme$subsystem", 00:26:51.328 "trtype": "$TEST_TRANSPORT", 00:26:51.328 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:51.328 "adrfam": "ipv4", 00:26:51.328 "trsvcid": "$NVMF_PORT", 00:26:51.328 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:51.328 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:51.328 "hdgst": ${hdgst:-false}, 00:26:51.328 "ddgst": ${ddgst:-false} 00:26:51.328 }, 00:26:51.328 "method": "bdev_nvme_attach_controller" 00:26:51.328 } 00:26:51.328 EOF 00:26:51.328 )") 00:26:51.328 01:55:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:51.328 01:55:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:51.328 01:55:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:51.328 { 00:26:51.328 "params": { 00:26:51.328 "name": "Nvme$subsystem", 00:26:51.328 "trtype": "$TEST_TRANSPORT", 00:26:51.328 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:51.328 "adrfam": "ipv4", 00:26:51.328 "trsvcid": "$NVMF_PORT", 00:26:51.328 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:51.328 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:51.328 "hdgst": ${hdgst:-false}, 00:26:51.328 "ddgst": ${ddgst:-false} 00:26:51.328 }, 00:26:51.328 "method": "bdev_nvme_attach_controller" 00:26:51.328 } 00:26:51.328 EOF 00:26:51.328 )") 00:26:51.328 01:55:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:51.328 01:55:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:51.328 01:55:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:51.328 { 00:26:51.328 "params": { 00:26:51.328 "name": "Nvme$subsystem", 00:26:51.328 "trtype": "$TEST_TRANSPORT", 00:26:51.328 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:51.328 "adrfam": "ipv4", 00:26:51.328 "trsvcid": "$NVMF_PORT", 00:26:51.328 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:51.328 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:51.328 "hdgst": ${hdgst:-false}, 00:26:51.328 "ddgst": ${ddgst:-false} 00:26:51.328 }, 00:26:51.328 "method": "bdev_nvme_attach_controller" 00:26:51.328 } 00:26:51.328 EOF 00:26:51.328 )") 00:26:51.328 01:55:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:51.328 01:55:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:51.328 01:55:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:51.328 { 00:26:51.328 "params": { 00:26:51.328 "name": "Nvme$subsystem", 00:26:51.328 "trtype": "$TEST_TRANSPORT", 00:26:51.328 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:51.328 "adrfam": "ipv4", 00:26:51.328 "trsvcid": "$NVMF_PORT", 00:26:51.328 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:51.328 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:51.328 "hdgst": ${hdgst:-false}, 00:26:51.328 "ddgst": ${ddgst:-false} 00:26:51.328 }, 00:26:51.328 "method": "bdev_nvme_attach_controller" 00:26:51.328 } 00:26:51.328 EOF 00:26:51.328 )") 00:26:51.328 01:55:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:51.328 01:55:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:26:51.328 01:55:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:26:51.328 01:55:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:51.328 "params": { 00:26:51.328 "name": "Nvme1", 00:26:51.328 "trtype": "tcp", 00:26:51.328 "traddr": "10.0.0.2", 00:26:51.328 "adrfam": "ipv4", 00:26:51.328 "trsvcid": "4420", 00:26:51.328 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:51.328 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:51.328 "hdgst": false, 00:26:51.328 "ddgst": false 00:26:51.328 }, 00:26:51.328 "method": "bdev_nvme_attach_controller" 00:26:51.328 },{ 00:26:51.328 "params": { 00:26:51.328 "name": "Nvme2", 00:26:51.328 "trtype": "tcp", 00:26:51.328 "traddr": "10.0.0.2", 00:26:51.328 "adrfam": "ipv4", 00:26:51.328 "trsvcid": "4420", 00:26:51.328 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:51.328 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:51.328 "hdgst": false, 00:26:51.328 "ddgst": false 00:26:51.328 }, 00:26:51.328 "method": "bdev_nvme_attach_controller" 00:26:51.328 },{ 00:26:51.328 "params": { 00:26:51.328 "name": "Nvme3", 00:26:51.328 "trtype": "tcp", 00:26:51.328 "traddr": "10.0.0.2", 00:26:51.328 "adrfam": "ipv4", 00:26:51.328 "trsvcid": "4420", 00:26:51.328 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:51.328 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:51.328 "hdgst": false, 00:26:51.328 "ddgst": false 00:26:51.328 }, 00:26:51.328 "method": "bdev_nvme_attach_controller" 00:26:51.328 },{ 00:26:51.328 "params": { 00:26:51.328 "name": "Nvme4", 00:26:51.328 "trtype": "tcp", 00:26:51.328 "traddr": "10.0.0.2", 00:26:51.328 "adrfam": "ipv4", 00:26:51.328 "trsvcid": "4420", 00:26:51.328 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:51.328 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:51.328 "hdgst": false, 00:26:51.328 "ddgst": false 00:26:51.328 }, 00:26:51.328 "method": "bdev_nvme_attach_controller" 00:26:51.328 },{ 00:26:51.328 "params": { 00:26:51.328 "name": "Nvme5", 00:26:51.328 "trtype": "tcp", 00:26:51.328 "traddr": "10.0.0.2", 00:26:51.328 "adrfam": "ipv4", 00:26:51.328 "trsvcid": "4420", 00:26:51.328 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:51.328 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:51.328 "hdgst": false, 00:26:51.328 "ddgst": false 00:26:51.328 }, 00:26:51.328 "method": "bdev_nvme_attach_controller" 00:26:51.328 },{ 00:26:51.328 "params": { 00:26:51.328 "name": "Nvme6", 00:26:51.328 "trtype": "tcp", 00:26:51.328 "traddr": "10.0.0.2", 00:26:51.328 "adrfam": "ipv4", 00:26:51.328 "trsvcid": "4420", 00:26:51.328 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:51.328 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:51.328 "hdgst": false, 00:26:51.328 "ddgst": false 00:26:51.328 }, 00:26:51.328 "method": "bdev_nvme_attach_controller" 00:26:51.328 },{ 00:26:51.328 "params": { 00:26:51.328 "name": "Nvme7", 00:26:51.328 "trtype": "tcp", 00:26:51.328 "traddr": "10.0.0.2", 00:26:51.328 "adrfam": "ipv4", 00:26:51.328 "trsvcid": "4420", 00:26:51.328 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:51.328 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:51.328 "hdgst": false, 00:26:51.328 "ddgst": false 00:26:51.328 }, 00:26:51.328 "method": "bdev_nvme_attach_controller" 00:26:51.328 },{ 00:26:51.328 "params": { 00:26:51.328 "name": "Nvme8", 00:26:51.328 "trtype": "tcp", 00:26:51.328 "traddr": "10.0.0.2", 00:26:51.328 "adrfam": "ipv4", 00:26:51.328 "trsvcid": "4420", 00:26:51.328 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:51.328 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:51.328 "hdgst": false, 
00:26:51.328 "ddgst": false 00:26:51.328 }, 00:26:51.328 "method": "bdev_nvme_attach_controller" 00:26:51.328 },{ 00:26:51.328 "params": { 00:26:51.328 "name": "Nvme9", 00:26:51.328 "trtype": "tcp", 00:26:51.328 "traddr": "10.0.0.2", 00:26:51.328 "adrfam": "ipv4", 00:26:51.328 "trsvcid": "4420", 00:26:51.328 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:51.328 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:51.328 "hdgst": false, 00:26:51.328 "ddgst": false 00:26:51.328 }, 00:26:51.328 "method": "bdev_nvme_attach_controller" 00:26:51.328 },{ 00:26:51.328 "params": { 00:26:51.328 "name": "Nvme10", 00:26:51.328 "trtype": "tcp", 00:26:51.328 "traddr": "10.0.0.2", 00:26:51.328 "adrfam": "ipv4", 00:26:51.328 "trsvcid": "4420", 00:26:51.328 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:51.328 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:51.328 "hdgst": false, 00:26:51.328 "ddgst": false 00:26:51.328 }, 00:26:51.328 "method": "bdev_nvme_attach_controller" 00:26:51.328 }' 00:26:51.329 [2024-05-15 01:55:15.051393] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:26:51.329 [2024-05-15 01:55:15.051468] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:26:51.329 EAL: No free 2048 kB hugepages reported on node 1 00:26:51.329 [2024-05-15 01:55:15.128416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:51.329 [2024-05-15 01:55:15.211731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:53.225 01:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:26:53.225 01:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@861 -- # return 0 00:26:53.225 01:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:53.225 01:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:53.225 01:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:53.225 01:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:53.225 01:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 4137229 00:26:53.225 01:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:26:53.225 01:55:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:26:54.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 4137229 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:26:54.157 01:55:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 4137162 00:26:54.157 01:55:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:26:54.157 01:55:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:54.157 01:55:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:26:54.157 01:55:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 
-- # local subsystem config 00:26:54.157 01:55:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:54.157 01:55:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:54.157 { 00:26:54.157 "params": { 00:26:54.158 "name": "Nvme$subsystem", 00:26:54.158 "trtype": "$TEST_TRANSPORT", 00:26:54.158 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:54.158 "adrfam": "ipv4", 00:26:54.158 "trsvcid": "$NVMF_PORT", 00:26:54.158 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:54.158 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:54.158 "hdgst": ${hdgst:-false}, 00:26:54.158 "ddgst": ${ddgst:-false} 00:26:54.158 }, 00:26:54.158 "method": "bdev_nvme_attach_controller" 00:26:54.158 } 00:26:54.158 EOF 00:26:54.158 )") 00:26:54.158 01:55:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:54.158 01:55:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:54.158 01:55:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:54.158 { 00:26:54.158 "params": { 00:26:54.158 "name": "Nvme$subsystem", 00:26:54.158 "trtype": "$TEST_TRANSPORT", 00:26:54.158 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:54.158 "adrfam": "ipv4", 00:26:54.158 "trsvcid": "$NVMF_PORT", 00:26:54.158 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:54.158 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:54.158 "hdgst": ${hdgst:-false}, 00:26:54.158 "ddgst": ${ddgst:-false} 00:26:54.158 }, 00:26:54.158 "method": "bdev_nvme_attach_controller" 00:26:54.158 } 00:26:54.158 EOF 00:26:54.158 )") 00:26:54.158 01:55:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:54.158 01:55:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:54.158 01:55:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:54.158 { 00:26:54.158 "params": { 00:26:54.158 "name": "Nvme$subsystem", 00:26:54.158 "trtype": "$TEST_TRANSPORT", 00:26:54.158 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:54.158 "adrfam": "ipv4", 00:26:54.158 "trsvcid": "$NVMF_PORT", 00:26:54.158 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:54.158 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:54.158 "hdgst": ${hdgst:-false}, 00:26:54.158 "ddgst": ${ddgst:-false} 00:26:54.158 }, 00:26:54.158 "method": "bdev_nvme_attach_controller" 00:26:54.158 } 00:26:54.158 EOF 00:26:54.158 )") 00:26:54.158 01:55:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:54.158 01:55:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:54.158 01:55:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:54.158 { 00:26:54.158 "params": { 00:26:54.158 "name": "Nvme$subsystem", 00:26:54.158 "trtype": "$TEST_TRANSPORT", 00:26:54.158 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:54.158 "adrfam": "ipv4", 00:26:54.158 "trsvcid": "$NVMF_PORT", 00:26:54.158 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:54.158 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:54.158 "hdgst": ${hdgst:-false}, 00:26:54.158 "ddgst": ${ddgst:-false} 00:26:54.158 }, 00:26:54.158 "method": "bdev_nvme_attach_controller" 00:26:54.158 } 00:26:54.158 EOF 00:26:54.158 )") 00:26:54.158 01:55:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 
00:26:54.158 01:55:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:54.158 01:55:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:54.158 { 00:26:54.158 "params": { 00:26:54.158 "name": "Nvme$subsystem", 00:26:54.158 "trtype": "$TEST_TRANSPORT", 00:26:54.158 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:54.158 "adrfam": "ipv4", 00:26:54.158 "trsvcid": "$NVMF_PORT", 00:26:54.158 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:54.158 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:54.158 "hdgst": ${hdgst:-false}, 00:26:54.158 "ddgst": ${ddgst:-false} 00:26:54.158 }, 00:26:54.158 "method": "bdev_nvme_attach_controller" 00:26:54.158 } 00:26:54.158 EOF 00:26:54.158 )") 00:26:54.158 01:55:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:54.158 01:55:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:54.158 01:55:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:54.158 { 00:26:54.158 "params": { 00:26:54.158 "name": "Nvme$subsystem", 00:26:54.158 "trtype": "$TEST_TRANSPORT", 00:26:54.158 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:54.158 "adrfam": "ipv4", 00:26:54.158 "trsvcid": "$NVMF_PORT", 00:26:54.158 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:54.158 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:54.158 "hdgst": ${hdgst:-false}, 00:26:54.158 "ddgst": ${ddgst:-false} 00:26:54.158 }, 00:26:54.158 "method": "bdev_nvme_attach_controller" 00:26:54.158 } 00:26:54.158 EOF 00:26:54.158 )") 00:26:54.158 01:55:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:54.158 01:55:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:54.158 01:55:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:54.158 { 00:26:54.158 "params": { 00:26:54.158 "name": "Nvme$subsystem", 00:26:54.158 "trtype": "$TEST_TRANSPORT", 00:26:54.158 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:54.158 "adrfam": "ipv4", 00:26:54.158 "trsvcid": "$NVMF_PORT", 00:26:54.158 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:54.158 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:54.158 "hdgst": ${hdgst:-false}, 00:26:54.158 "ddgst": ${ddgst:-false} 00:26:54.158 }, 00:26:54.158 "method": "bdev_nvme_attach_controller" 00:26:54.158 } 00:26:54.158 EOF 00:26:54.158 )") 00:26:54.158 01:55:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:54.158 01:55:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:54.158 01:55:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:54.158 { 00:26:54.158 "params": { 00:26:54.158 "name": "Nvme$subsystem", 00:26:54.158 "trtype": "$TEST_TRANSPORT", 00:26:54.158 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:54.158 "adrfam": "ipv4", 00:26:54.158 "trsvcid": "$NVMF_PORT", 00:26:54.158 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:54.158 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:54.158 "hdgst": ${hdgst:-false}, 00:26:54.158 "ddgst": ${ddgst:-false} 00:26:54.158 }, 00:26:54.158 "method": "bdev_nvme_attach_controller" 00:26:54.158 } 00:26:54.158 EOF 00:26:54.158 )") 00:26:54.158 01:55:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:54.158 01:55:18 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:54.158 01:55:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:54.158 { 00:26:54.158 "params": { 00:26:54.158 "name": "Nvme$subsystem", 00:26:54.158 "trtype": "$TEST_TRANSPORT", 00:26:54.158 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:54.158 "adrfam": "ipv4", 00:26:54.158 "trsvcid": "$NVMF_PORT", 00:26:54.158 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:54.158 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:54.158 "hdgst": ${hdgst:-false}, 00:26:54.158 "ddgst": ${ddgst:-false} 00:26:54.158 }, 00:26:54.158 "method": "bdev_nvme_attach_controller" 00:26:54.158 } 00:26:54.158 EOF 00:26:54.158 )") 00:26:54.158 01:55:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:54.158 01:55:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:54.158 01:55:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:54.158 { 00:26:54.158 "params": { 00:26:54.158 "name": "Nvme$subsystem", 00:26:54.158 "trtype": "$TEST_TRANSPORT", 00:26:54.158 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:54.158 "adrfam": "ipv4", 00:26:54.158 "trsvcid": "$NVMF_PORT", 00:26:54.158 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:54.158 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:54.158 "hdgst": ${hdgst:-false}, 00:26:54.158 "ddgst": ${ddgst:-false} 00:26:54.158 }, 00:26:54.158 "method": "bdev_nvme_attach_controller" 00:26:54.158 } 00:26:54.158 EOF 00:26:54.158 )") 00:26:54.158 01:55:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:54.158 01:55:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:26:54.158 01:55:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:26:54.158 01:55:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:54.158 "params": { 00:26:54.158 "name": "Nvme1", 00:26:54.158 "trtype": "tcp", 00:26:54.158 "traddr": "10.0.0.2", 00:26:54.158 "adrfam": "ipv4", 00:26:54.158 "trsvcid": "4420", 00:26:54.158 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:54.158 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:54.158 "hdgst": false, 00:26:54.158 "ddgst": false 00:26:54.159 }, 00:26:54.159 "method": "bdev_nvme_attach_controller" 00:26:54.159 },{ 00:26:54.159 "params": { 00:26:54.159 "name": "Nvme2", 00:26:54.159 "trtype": "tcp", 00:26:54.159 "traddr": "10.0.0.2", 00:26:54.159 "adrfam": "ipv4", 00:26:54.159 "trsvcid": "4420", 00:26:54.159 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:54.159 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:54.159 "hdgst": false, 00:26:54.159 "ddgst": false 00:26:54.159 }, 00:26:54.159 "method": "bdev_nvme_attach_controller" 00:26:54.159 },{ 00:26:54.159 "params": { 00:26:54.159 "name": "Nvme3", 00:26:54.159 "trtype": "tcp", 00:26:54.159 "traddr": "10.0.0.2", 00:26:54.159 "adrfam": "ipv4", 00:26:54.159 "trsvcid": "4420", 00:26:54.159 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:54.159 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:54.159 "hdgst": false, 00:26:54.159 "ddgst": false 00:26:54.159 }, 00:26:54.159 "method": "bdev_nvme_attach_controller" 00:26:54.159 },{ 00:26:54.159 "params": { 00:26:54.159 "name": "Nvme4", 00:26:54.159 "trtype": "tcp", 00:26:54.159 "traddr": "10.0.0.2", 00:26:54.159 "adrfam": "ipv4", 00:26:54.159 "trsvcid": "4420", 00:26:54.159 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:54.159 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:54.159 "hdgst": false, 00:26:54.159 "ddgst": false 00:26:54.159 }, 00:26:54.159 "method": "bdev_nvme_attach_controller" 00:26:54.159 },{ 00:26:54.159 "params": { 00:26:54.159 "name": "Nvme5", 00:26:54.159 "trtype": "tcp", 00:26:54.159 "traddr": "10.0.0.2", 00:26:54.159 "adrfam": "ipv4", 00:26:54.159 "trsvcid": "4420", 00:26:54.159 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:54.159 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:54.159 "hdgst": false, 00:26:54.159 "ddgst": false 00:26:54.159 }, 00:26:54.159 "method": "bdev_nvme_attach_controller" 00:26:54.159 },{ 00:26:54.159 "params": { 00:26:54.159 "name": "Nvme6", 00:26:54.159 "trtype": "tcp", 00:26:54.159 "traddr": "10.0.0.2", 00:26:54.159 "adrfam": "ipv4", 00:26:54.159 "trsvcid": "4420", 00:26:54.159 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:54.159 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:54.159 "hdgst": false, 00:26:54.159 "ddgst": false 00:26:54.159 }, 00:26:54.159 "method": "bdev_nvme_attach_controller" 00:26:54.159 },{ 00:26:54.159 "params": { 00:26:54.159 "name": "Nvme7", 00:26:54.159 "trtype": "tcp", 00:26:54.159 "traddr": "10.0.0.2", 00:26:54.159 "adrfam": "ipv4", 00:26:54.159 "trsvcid": "4420", 00:26:54.159 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:54.159 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:54.159 "hdgst": false, 00:26:54.159 "ddgst": false 00:26:54.159 }, 00:26:54.159 "method": "bdev_nvme_attach_controller" 00:26:54.159 },{ 00:26:54.159 "params": { 00:26:54.159 "name": "Nvme8", 00:26:54.159 "trtype": "tcp", 00:26:54.159 "traddr": "10.0.0.2", 00:26:54.159 "adrfam": "ipv4", 00:26:54.159 "trsvcid": "4420", 00:26:54.159 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:54.159 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:54.159 "hdgst": false, 
00:26:54.159 "ddgst": false 00:26:54.159 }, 00:26:54.159 "method": "bdev_nvme_attach_controller" 00:26:54.159 },{ 00:26:54.159 "params": { 00:26:54.159 "name": "Nvme9", 00:26:54.159 "trtype": "tcp", 00:26:54.159 "traddr": "10.0.0.2", 00:26:54.159 "adrfam": "ipv4", 00:26:54.159 "trsvcid": "4420", 00:26:54.159 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:54.159 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:54.159 "hdgst": false, 00:26:54.159 "ddgst": false 00:26:54.159 }, 00:26:54.159 "method": "bdev_nvme_attach_controller" 00:26:54.159 },{ 00:26:54.159 "params": { 00:26:54.159 "name": "Nvme10", 00:26:54.159 "trtype": "tcp", 00:26:54.159 "traddr": "10.0.0.2", 00:26:54.159 "adrfam": "ipv4", 00:26:54.159 "trsvcid": "4420", 00:26:54.159 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:54.159 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:54.159 "hdgst": false, 00:26:54.159 "ddgst": false 00:26:54.159 }, 00:26:54.159 "method": "bdev_nvme_attach_controller" 00:26:54.159 }' 00:26:54.159 [2024-05-15 01:55:18.069648] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:26:54.159 [2024-05-15 01:55:18.069735] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4137647 ] 00:26:54.417 EAL: No free 2048 kB hugepages reported on node 1 00:26:54.417 [2024-05-15 01:55:18.145150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.417 [2024-05-15 01:55:18.227378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:55.786 Running I/O for 1 seconds... 00:26:57.158 00:26:57.158 Latency(us) 00:26:57.158 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:57.158 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:57.158 Verification LBA range: start 0x0 length 0x400 00:26:57.158 Nvme1n1 : 1.09 235.34 14.71 0.00 0.00 268754.87 18544.26 256318.58 00:26:57.158 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:57.158 Verification LBA range: start 0x0 length 0x400 00:26:57.158 Nvme2n1 : 1.12 228.05 14.25 0.00 0.00 273243.97 21651.15 256318.58 00:26:57.158 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:57.158 Verification LBA range: start 0x0 length 0x400 00:26:57.158 Nvme3n1 : 1.07 242.81 15.18 0.00 0.00 250404.65 7573.05 245444.46 00:26:57.158 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:57.158 Verification LBA range: start 0x0 length 0x400 00:26:57.158 Nvme4n1 : 1.07 242.59 15.16 0.00 0.00 245923.37 8252.68 250104.79 00:26:57.158 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:57.158 Verification LBA range: start 0x0 length 0x400 00:26:57.158 Nvme5n1 : 1.15 222.56 13.91 0.00 0.00 266415.41 20583.16 256318.58 00:26:57.158 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:57.158 Verification LBA range: start 0x0 length 0x400 00:26:57.158 Nvme6n1 : 1.11 230.13 14.38 0.00 0.00 252422.07 19709.35 253211.69 00:26:57.158 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:57.158 Verification LBA range: start 0x0 length 0x400 00:26:57.158 Nvme7n1 : 1.13 227.47 14.22 0.00 0.00 251194.79 21748.24 239230.67 00:26:57.158 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:57.158 Verification LBA range: start 0x0 length 0x400 
00:26:57.158 Nvme8n1 : 1.17 278.06 17.38 0.00 0.00 202584.73 1444.22 231463.44 00:26:57.158 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:57.158 Verification LBA range: start 0x0 length 0x400 00:26:57.158 Nvme9n1 : 1.16 220.60 13.79 0.00 0.00 251003.83 19029.71 284280.60 00:26:57.158 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:57.158 Verification LBA range: start 0x0 length 0x400 00:26:57.158 Nvme10n1 : 1.18 278.98 17.44 0.00 0.00 195160.38 879.88 260978.92 00:26:57.158 =================================================================================================================== 00:26:57.158 Total : 2406.59 150.41 0.00 0.00 243276.07 879.88 284280.60 00:26:57.158 01:55:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:26:57.158 01:55:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:26:57.158 01:55:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:57.158 01:55:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:57.158 01:55:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:26:57.158 01:55:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:57.158 01:55:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:26:57.158 01:55:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:57.158 01:55:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:26:57.158 01:55:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:57.158 01:55:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:57.158 rmmod nvme_tcp 00:26:57.158 rmmod nvme_fabrics 00:26:57.158 rmmod nvme_keyring 00:26:57.158 01:55:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:57.158 01:55:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:26:57.158 01:55:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:26:57.158 01:55:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 4137162 ']' 00:26:57.158 01:55:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 4137162 00:26:57.158 01:55:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@947 -- # '[' -z 4137162 ']' 00:26:57.158 01:55:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # kill -0 4137162 00:26:57.158 01:55:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # uname 00:26:57.158 01:55:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:26:57.158 01:55:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4137162 00:26:57.158 01:55:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:26:57.158 01:55:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:26:57.158 01:55:21 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4137162' 00:26:57.158 killing process with pid 4137162 00:26:57.158 01:55:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # kill 4137162 00:26:57.159 [2024-05-15 01:55:21.079061] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:57.159 01:55:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@971 -- # wait 4137162 00:26:57.725 01:55:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:57.725 01:55:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:57.725 01:55:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:57.725 01:55:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:57.725 01:55:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:57.725 01:55:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:57.725 01:55:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:57.725 01:55:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:00.251 00:27:00.251 real 0m11.999s 00:27:00.251 user 0m33.699s 00:27:00.251 sys 0m3.377s 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:00.251 ************************************ 00:27:00.251 END TEST nvmf_shutdown_tc1 00:27:00.251 ************************************ 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1104 -- # xtrace_disable 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:00.251 ************************************ 00:27:00.251 START TEST nvmf_shutdown_tc2 00:27:00.251 ************************************ 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1122 -- # nvmf_shutdown_tc2 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:00.251 01:55:23 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:00.251 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:00.252 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:00.252 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:00.252 Found net devices under 0000:09:00.0: cvl_0_0 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:00.252 Found net devices under 0000:09:00.1: cvl_0_1 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:00.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:00.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:27:00.252 00:27:00.252 --- 10.0.0.2 ping statistics --- 00:27:00.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:00.252 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:00.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:00.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:27:00.252 00:27:00.252 --- 10.0.0.1 ping statistics --- 00:27:00.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:00.252 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@721 -- # xtrace_disable 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=4138417 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 4138417 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@828 -- # '[' -z 4138417 ']' 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:00.252 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local max_retries=100 00:27:00.253 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:00.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:00.253 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:00.253 01:55:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:00.253 [2024-05-15 01:55:23.917077] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
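[Editor's sketch] The namespace plumbing traced above reduces to a handful of iproute2 calls: the target-side port (cvl_0_0) is moved into its own network namespace so initiator and target traffic must cross the link between the two ports rather than short-circuiting through the kernel, and the target application is then launched entirely inside that namespace. A condensed sketch using the interface names, addresses, and flags from this run (the nvmf_tgt path is abbreviated):

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"          # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator port stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.2                       # root ns -> namespaced target port
ip netns exec "$NS" ping -c 1 10.0.0.1   # and the reverse direction
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
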
00:27:00.253 [2024-05-15 01:55:23.917167] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:00.253 EAL: No free 2048 kB hugepages reported on node 1 00:27:00.253 [2024-05-15 01:55:23.991266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:00.253 [2024-05-15 01:55:24.072182] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:00.253 [2024-05-15 01:55:24.072260] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:00.253 [2024-05-15 01:55:24.072290] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:00.253 [2024-05-15 01:55:24.072302] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:00.253 [2024-05-15 01:55:24.072312] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:00.253 [2024-05-15 01:55:24.072394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:00.253 [2024-05-15 01:55:24.072457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:00.253 [2024-05-15 01:55:24.072502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:00.253 [2024-05-15 01:55:24.072504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:00.511 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:00.511 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@861 -- # return 0 00:27:00.511 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:00.511 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@727 -- # xtrace_disable 00:27:00.511 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:00.511 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:00.511 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:00.511 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:00.511 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:00.511 [2024-05-15 01:55:24.210850] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:00.511 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:00.511 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:00.511 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:00.511 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@721 -- # xtrace_disable 00:27:00.511 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:00.511 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:00.511 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:00.511 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:00.511 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:00.511 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:00.511 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:00.511 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:00.511 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:00.511 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:00.511 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:00.511 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:00.511 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:00.511 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:00.511 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:00.511 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:00.511 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:00.511 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:00.511 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:00.511 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:00.511 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:00.511 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:00.511 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:00.511 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:00.511 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:00.511 Malloc1 00:27:00.511 [2024-05-15 01:55:24.285969] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:00.511 [2024-05-15 01:55:24.286298] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:00.511 Malloc2 00:27:00.511 Malloc3 00:27:00.511 Malloc4 00:27:00.768 Malloc5 00:27:00.768 Malloc6 00:27:00.768 Malloc7 00:27:00.768 Malloc8 00:27:00.768 Malloc9 00:27:00.768 Malloc10 00:27:01.027 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:01.027 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:01.027 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@727 -- # xtrace_disable 00:27:01.027 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:01.027 01:55:24 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=4138594 00:27:01.027 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 4138594 /var/tmp/bdevperf.sock 00:27:01.027 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@828 -- # '[' -z 4138594 ']' 00:27:01.027 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:01.027 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:01.027 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:01.027 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local max_retries=100 00:27:01.028 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:01.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:01.028 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:27:01.028 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:01.028 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:27:01.028 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:01.028 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:01.028 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:01.028 { 00:27:01.028 "params": { 00:27:01.028 "name": "Nvme$subsystem", 00:27:01.028 "trtype": "$TEST_TRANSPORT", 00:27:01.028 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:01.028 "adrfam": "ipv4", 00:27:01.028 "trsvcid": "$NVMF_PORT", 00:27:01.028 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:01.028 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:01.028 "hdgst": ${hdgst:-false}, 00:27:01.028 "ddgst": ${ddgst:-false} 00:27:01.028 }, 00:27:01.028 "method": "bdev_nvme_attach_controller" 00:27:01.028 } 00:27:01.028 EOF 00:27:01.028 )") 00:27:01.028 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:01.028 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:01.028 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:01.028 { 00:27:01.028 "params": { 00:27:01.028 "name": "Nvme$subsystem", 00:27:01.028 "trtype": "$TEST_TRANSPORT", 00:27:01.028 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:01.028 "adrfam": "ipv4", 00:27:01.028 "trsvcid": "$NVMF_PORT", 00:27:01.028 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:01.028 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:01.028 "hdgst": ${hdgst:-false}, 00:27:01.028 "ddgst": ${ddgst:-false} 00:27:01.028 }, 00:27:01.028 "method": "bdev_nvme_attach_controller" 00:27:01.028 } 00:27:01.028 EOF 00:27:01.028 )") 00:27:01.028 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:01.028 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:01.028 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:01.028 { 00:27:01.028 "params": { 00:27:01.028 "name": "Nvme$subsystem", 00:27:01.028 "trtype": "$TEST_TRANSPORT", 00:27:01.028 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:01.028 "adrfam": "ipv4", 00:27:01.028 "trsvcid": "$NVMF_PORT", 00:27:01.028 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:01.028 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:01.028 "hdgst": ${hdgst:-false}, 00:27:01.028 "ddgst": ${ddgst:-false} 00:27:01.028 }, 00:27:01.028 "method": "bdev_nvme_attach_controller" 00:27:01.028 } 00:27:01.028 EOF 00:27:01.028 )") 00:27:01.028 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:01.028 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:01.028 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:01.028 { 00:27:01.028 "params": { 00:27:01.028 "name": "Nvme$subsystem", 00:27:01.028 "trtype": "$TEST_TRANSPORT", 00:27:01.028 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:01.028 "adrfam": "ipv4", 00:27:01.028 "trsvcid": "$NVMF_PORT", 00:27:01.028 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:01.028 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:01.028 "hdgst": ${hdgst:-false}, 00:27:01.028 "ddgst": ${ddgst:-false} 00:27:01.028 }, 00:27:01.028 "method": "bdev_nvme_attach_controller" 00:27:01.028 } 00:27:01.028 EOF 00:27:01.028 )") 00:27:01.028 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:01.028 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:01.028 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:01.028 { 00:27:01.028 "params": { 00:27:01.028 "name": "Nvme$subsystem", 00:27:01.028 "trtype": "$TEST_TRANSPORT", 00:27:01.028 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:01.028 "adrfam": "ipv4", 00:27:01.028 "trsvcid": "$NVMF_PORT", 00:27:01.028 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:01.028 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:01.028 "hdgst": ${hdgst:-false}, 00:27:01.028 "ddgst": ${ddgst:-false} 00:27:01.028 }, 00:27:01.028 "method": "bdev_nvme_attach_controller" 00:27:01.028 } 00:27:01.028 EOF 00:27:01.028 )") 00:27:01.028 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:01.028 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:01.028 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:01.028 { 00:27:01.028 "params": { 00:27:01.028 "name": "Nvme$subsystem", 00:27:01.028 "trtype": "$TEST_TRANSPORT", 00:27:01.028 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:01.028 "adrfam": "ipv4", 00:27:01.028 "trsvcid": "$NVMF_PORT", 00:27:01.028 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:01.028 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:01.028 "hdgst": ${hdgst:-false}, 00:27:01.028 "ddgst": ${ddgst:-false} 00:27:01.028 }, 00:27:01.028 "method": "bdev_nvme_attach_controller" 00:27:01.028 } 00:27:01.028 EOF 00:27:01.028 )") 00:27:01.029 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:01.029 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:27:01.029 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:01.029 { 00:27:01.029 "params": { 00:27:01.029 "name": "Nvme$subsystem", 00:27:01.029 "trtype": "$TEST_TRANSPORT", 00:27:01.029 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:01.029 "adrfam": "ipv4", 00:27:01.029 "trsvcid": "$NVMF_PORT", 00:27:01.029 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:01.029 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:01.029 "hdgst": ${hdgst:-false}, 00:27:01.029 "ddgst": ${ddgst:-false} 00:27:01.029 }, 00:27:01.029 "method": "bdev_nvme_attach_controller" 00:27:01.029 } 00:27:01.029 EOF 00:27:01.029 )") 00:27:01.029 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:01.029 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:01.029 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:01.029 { 00:27:01.029 "params": { 00:27:01.029 "name": "Nvme$subsystem", 00:27:01.029 "trtype": "$TEST_TRANSPORT", 00:27:01.029 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:01.029 "adrfam": "ipv4", 00:27:01.029 "trsvcid": "$NVMF_PORT", 00:27:01.029 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:01.029 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:01.029 "hdgst": ${hdgst:-false}, 00:27:01.029 "ddgst": ${ddgst:-false} 00:27:01.029 }, 00:27:01.029 "method": "bdev_nvme_attach_controller" 00:27:01.029 } 00:27:01.029 EOF 00:27:01.029 )") 00:27:01.029 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:01.029 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:01.029 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:01.029 { 00:27:01.029 "params": { 00:27:01.029 "name": "Nvme$subsystem", 00:27:01.029 "trtype": "$TEST_TRANSPORT", 00:27:01.029 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:01.029 "adrfam": "ipv4", 00:27:01.029 "trsvcid": "$NVMF_PORT", 00:27:01.029 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:01.029 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:01.029 "hdgst": ${hdgst:-false}, 00:27:01.029 "ddgst": ${ddgst:-false} 00:27:01.029 }, 00:27:01.029 "method": "bdev_nvme_attach_controller" 00:27:01.029 } 00:27:01.029 EOF 00:27:01.029 )") 00:27:01.029 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:01.029 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:01.029 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:01.029 { 00:27:01.029 "params": { 00:27:01.029 "name": "Nvme$subsystem", 00:27:01.029 "trtype": "$TEST_TRANSPORT", 00:27:01.029 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:01.029 "adrfam": "ipv4", 00:27:01.029 "trsvcid": "$NVMF_PORT", 00:27:01.029 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:01.029 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:01.029 "hdgst": ${hdgst:-false}, 00:27:01.029 "ddgst": ${ddgst:-false} 00:27:01.029 }, 00:27:01.029 "method": "bdev_nvme_attach_controller" 00:27:01.029 } 00:27:01.029 EOF 00:27:01.029 )") 00:27:01.029 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:01.029 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
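[Editor's sketch] The wall of config+= heredocs above is the trace of gen_nvmf_target_json building one bdev_nvme_attach_controller entry per subsystem; the fragments are then comma-joined (the IFS=, / printf pair) and validated with jq. A simplified reconstruction follows; note the outer "subsystems"/"bdev" wrapper is assumed from bdevperf's JSON-config format and is not itself visible in the trace:

gen_config() {
    local subsystem config=()
    for subsystem in "$@"; do
        config+=("$(cat <<-EOF
        {
          "params": {
            "name": "Nvme$subsystem",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
            "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
EOF
        )")
    done
    # Comma-join the per-subsystem fragments and validate the result with jq.
    jq . <<-JSON
    { "subsystems": [ { "subsystem": "bdev",
        "config": [ $(IFS=,; printf '%s\n' "${config[*]}") ] } ] }
JSON
}

bdevperf then consumes this via process substitution (e.g. --json <(gen_config {1..10})), which is presumably why the trace shows --json /dev/fd/63 on the bdevperf command line.
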
00:27:01.029 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:27:01.029 01:55:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:01.029 "params": { 00:27:01.029 "name": "Nvme1", 00:27:01.029 "trtype": "tcp", 00:27:01.029 "traddr": "10.0.0.2", 00:27:01.029 "adrfam": "ipv4", 00:27:01.029 "trsvcid": "4420", 00:27:01.029 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:01.029 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:01.029 "hdgst": false, 00:27:01.029 "ddgst": false 00:27:01.029 }, 00:27:01.029 "method": "bdev_nvme_attach_controller" 00:27:01.029 },{ 00:27:01.029 "params": { 00:27:01.029 "name": "Nvme2", 00:27:01.029 "trtype": "tcp", 00:27:01.029 "traddr": "10.0.0.2", 00:27:01.029 "adrfam": "ipv4", 00:27:01.029 "trsvcid": "4420", 00:27:01.029 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:01.029 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:01.029 "hdgst": false, 00:27:01.029 "ddgst": false 00:27:01.029 }, 00:27:01.029 "method": "bdev_nvme_attach_controller" 00:27:01.029 },{ 00:27:01.029 "params": { 00:27:01.029 "name": "Nvme3", 00:27:01.029 "trtype": "tcp", 00:27:01.029 "traddr": "10.0.0.2", 00:27:01.029 "adrfam": "ipv4", 00:27:01.029 "trsvcid": "4420", 00:27:01.029 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:01.029 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:01.029 "hdgst": false, 00:27:01.029 "ddgst": false 00:27:01.029 }, 00:27:01.029 "method": "bdev_nvme_attach_controller" 00:27:01.029 },{ 00:27:01.029 "params": { 00:27:01.029 "name": "Nvme4", 00:27:01.029 "trtype": "tcp", 00:27:01.029 "traddr": "10.0.0.2", 00:27:01.029 "adrfam": "ipv4", 00:27:01.029 "trsvcid": "4420", 00:27:01.029 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:01.029 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:01.029 "hdgst": false, 00:27:01.029 "ddgst": false 00:27:01.029 }, 00:27:01.029 "method": "bdev_nvme_attach_controller" 00:27:01.029 },{ 00:27:01.029 "params": { 00:27:01.029 "name": "Nvme5", 00:27:01.029 "trtype": "tcp", 00:27:01.029 "traddr": "10.0.0.2", 00:27:01.029 "adrfam": "ipv4", 00:27:01.029 "trsvcid": "4420", 00:27:01.029 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:01.029 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:01.029 "hdgst": false, 00:27:01.029 "ddgst": false 00:27:01.029 }, 00:27:01.029 "method": "bdev_nvme_attach_controller" 00:27:01.029 },{ 00:27:01.029 "params": { 00:27:01.029 "name": "Nvme6", 00:27:01.029 "trtype": "tcp", 00:27:01.029 "traddr": "10.0.0.2", 00:27:01.029 "adrfam": "ipv4", 00:27:01.029 "trsvcid": "4420", 00:27:01.029 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:01.029 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:01.029 "hdgst": false, 00:27:01.029 "ddgst": false 00:27:01.029 }, 00:27:01.029 "method": "bdev_nvme_attach_controller" 00:27:01.029 },{ 00:27:01.029 "params": { 00:27:01.029 "name": "Nvme7", 00:27:01.029 "trtype": "tcp", 00:27:01.029 "traddr": "10.0.0.2", 00:27:01.029 "adrfam": "ipv4", 00:27:01.029 "trsvcid": "4420", 00:27:01.029 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:01.029 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:01.029 "hdgst": false, 00:27:01.029 "ddgst": false 00:27:01.029 }, 00:27:01.029 "method": "bdev_nvme_attach_controller" 00:27:01.029 },{ 00:27:01.029 "params": { 00:27:01.029 "name": "Nvme8", 00:27:01.029 "trtype": "tcp", 00:27:01.029 "traddr": "10.0.0.2", 00:27:01.029 "adrfam": "ipv4", 00:27:01.029 "trsvcid": "4420", 00:27:01.029 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:01.029 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:01.029 "hdgst": false, 
00:27:01.029 "ddgst": false 00:27:01.029 }, 00:27:01.029 "method": "bdev_nvme_attach_controller" 00:27:01.029 },{ 00:27:01.029 "params": { 00:27:01.029 "name": "Nvme9", 00:27:01.029 "trtype": "tcp", 00:27:01.029 "traddr": "10.0.0.2", 00:27:01.029 "adrfam": "ipv4", 00:27:01.029 "trsvcid": "4420", 00:27:01.029 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:01.029 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:01.029 "hdgst": false, 00:27:01.029 "ddgst": false 00:27:01.029 }, 00:27:01.029 "method": "bdev_nvme_attach_controller" 00:27:01.029 },{ 00:27:01.029 "params": { 00:27:01.029 "name": "Nvme10", 00:27:01.029 "trtype": "tcp", 00:27:01.029 "traddr": "10.0.0.2", 00:27:01.029 "adrfam": "ipv4", 00:27:01.029 "trsvcid": "4420", 00:27:01.029 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:01.030 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:01.030 "hdgst": false, 00:27:01.030 "ddgst": false 00:27:01.030 }, 00:27:01.030 "method": "bdev_nvme_attach_controller" 00:27:01.030 }' 00:27:01.030 [2024-05-15 01:55:24.782586] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:27:01.030 [2024-05-15 01:55:24.782656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4138594 ] 00:27:01.030 EAL: No free 2048 kB hugepages reported on node 1 00:27:01.030 [2024-05-15 01:55:24.854131] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:01.030 [2024-05-15 01:55:24.935883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:02.926 Running I/O for 10 seconds... 00:27:02.926 01:55:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:02.926 01:55:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@861 -- # return 0 00:27:02.926 01:55:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:02.926 01:55:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:02.926 01:55:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:02.926 01:55:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:02.926 01:55:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:02.926 01:55:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:02.926 01:55:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:02.926 01:55:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:27:02.926 01:55:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:27:02.926 01:55:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:02.926 01:55:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:02.926 01:55:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:02.926 01:55:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:02.926 01:55:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:27:02.926 01:55:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:02.926 01:55:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:02.926 01:55:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:27:02.926 01:55:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:02.926 01:55:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:03.184 01:55:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:03.184 01:55:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:03.184 01:55:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:03.184 01:55:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:03.184 01:55:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:03.184 01:55:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:03.184 01:55:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:03.184 01:55:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:27:03.184 01:55:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:27:03.184 01:55:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:27:03.184 01:55:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:27:03.184 01:55:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:27:03.184 01:55:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 4138594 00:27:03.184 01:55:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@947 -- # '[' -z 4138594 ']' 00:27:03.184 01:55:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # kill -0 4138594 00:27:03.184 01:55:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # uname 00:27:03.184 01:55:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:27:03.184 01:55:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4138594 00:27:03.442 01:55:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:27:03.442 01:55:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:27:03.442 01:55:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4138594' 00:27:03.442 killing process with pid 4138594 00:27:03.442 01:55:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # kill 4138594 00:27:03.442 01:55:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # wait 4138594 00:27:03.442 Received shutdown signal, test time was about 0.792762 seconds 00:27:03.442 00:27:03.442 Latency(us) 00:27:03.442 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:03.442 Job: Nvme1n1 (Core 
Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:03.442 Verification LBA range: start 0x0 length 0x400 00:27:03.442 Nvme1n1 : 0.78 254.19 15.89 0.00 0.00 244819.41 8980.86 212822.09 00:27:03.442 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:03.442 Verification LBA range: start 0x0 length 0x400 00:27:03.442 Nvme2n1 : 0.74 173.47 10.84 0.00 0.00 354501.59 21359.88 271853.04 00:27:03.442 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:03.442 Verification LBA range: start 0x0 length 0x400 00:27:03.442 Nvme3n1 : 0.77 249.49 15.59 0.00 0.00 240695.62 32816.55 239230.67 00:27:03.442 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:03.442 Verification LBA range: start 0x0 length 0x400 00:27:03.442 Nvme4n1 : 0.77 248.69 15.54 0.00 0.00 235324.81 20291.89 234570.33 00:27:03.442 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:03.442 Verification LBA range: start 0x0 length 0x400 00:27:03.442 Nvme5n1 : 0.79 242.44 15.15 0.00 0.00 235753.88 21456.97 254765.13 00:27:03.442 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:03.442 Verification LBA range: start 0x0 length 0x400 00:27:03.442 Nvme6n1 : 0.78 245.31 15.33 0.00 0.00 226467.08 19418.07 267192.70 00:27:03.442 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:03.442 Verification LBA range: start 0x0 length 0x400 00:27:03.442 Nvme7n1 : 0.78 250.12 15.63 0.00 0.00 214912.38 4223.43 271853.04 00:27:03.442 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:03.442 Verification LBA range: start 0x0 length 0x400 00:27:03.442 Nvme8n1 : 0.79 243.75 15.23 0.00 0.00 214838.99 16117.00 243891.01 00:27:03.442 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:03.442 Verification LBA range: start 0x0 length 0x400 00:27:03.442 Nvme9n1 : 0.76 172.83 10.80 0.00 0.00 288764.87 3228.25 270299.59 00:27:03.442 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:03.442 Verification LBA range: start 0x0 length 0x400 00:27:03.442 Nvme10n1 : 0.76 168.59 10.54 0.00 0.00 292111.55 21845.33 292047.83 00:27:03.442 =================================================================================================================== 00:27:03.442 Total : 2248.88 140.56 0.00 0.00 248493.47 3228.25 292047.83 00:27:03.700 01:55:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:27:04.632 01:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 4138417 00:27:04.632 01:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:27:04.632 01:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:04.632 01:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:04.632 01:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:04.632 01:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:04.632 01:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:04.632 01:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 
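[Editor's sketch] The I/O gate that ran just before this shutdown (read_io_count=67 on the first probe, 131 on the second) is shutdown.sh's waitforio; reconstructed from the trace, with rpc_cmd being the test suite's wrapper around scripts/rpc.py:

# Poll bdevperf's RPC socket until the named bdev reports at least 100
# completed reads, retrying up to 10 times at 0.25 s intervals.
waitforio() {
    local rpc_sock=$1 bdev=$2 ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0   # enough I/O observed; the target is demonstrably serving
            break
        fi
        sleep 0.25
    done
    return $ret
}
# waitforio /var/tmp/bdevperf.sock Nvme1n1   # as invoked by shutdown_tc2
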
00:27:04.632 01:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:04.632 01:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:27:04.632 01:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:04.632 01:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:04.632 rmmod nvme_tcp 00:27:04.632 rmmod nvme_fabrics 00:27:04.632 rmmod nvme_keyring 00:27:04.632 01:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:04.632 01:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:27:04.632 01:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:27:04.632 01:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 4138417 ']' 00:27:04.632 01:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 4138417 00:27:04.632 01:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@947 -- # '[' -z 4138417 ']' 00:27:04.632 01:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # kill -0 4138417 00:27:04.632 01:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # uname 00:27:04.632 01:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:27:04.632 01:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4138417 00:27:04.632 01:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:27:04.632 01:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:27:04.632 01:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4138417' 00:27:04.632 killing process with pid 4138417 00:27:04.632 01:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # kill 4138417 00:27:04.632 [2024-05-15 01:55:28.562564] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:04.632 01:55:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # wait 4138417 00:27:05.197 01:55:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:05.198 01:55:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:05.198 01:55:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:05.198 01:55:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:05.198 01:55:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:05.198 01:55:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:05.198 01:55:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:05.198 01:55:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:07.729 01:55:31 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:07.729 00:27:07.729 real 0m7.416s 00:27:07.729 user 0m22.055s 00:27:07.729 sys 0m1.363s 00:27:07.729 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:27:07.729 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:07.729 ************************************ 00:27:07.729 END TEST nvmf_shutdown_tc2 00:27:07.729 ************************************ 00:27:07.729 01:55:31 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:27:07.729 01:55:31 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:27:07.729 01:55:31 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1104 -- # xtrace_disable 00:27:07.729 01:55:31 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:07.729 ************************************ 00:27:07.729 START TEST nvmf_shutdown_tc3 00:27:07.729 ************************************ 00:27:07.729 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # nvmf_shutdown_tc3 00:27:07.729 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:27:07.729 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:07.729 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:07.729 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:07.729 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:07.729 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:07.729 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:07.729 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:07.729 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:07.729 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:07.729 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:07.729 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:07.729 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:07.729 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:07.729 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:07.729 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:07.729 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:07.729 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:07.729 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:07.730 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:07.730 01:55:31 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:07.730 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:07.730 Found net devices under 0000:09:00.0: cvl_0_0 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:07.730 01:55:31 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:07.730 Found net devices under 0000:09:00.1: cvl_0_1 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:07.730 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:07.730 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:27:07.730 00:27:07.730 --- 10.0.0.2 ping statistics --- 00:27:07.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:07.730 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:07.730 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:07.730 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:27:07.730 00:27:07.730 --- 10.0.0.1 ping statistics --- 00:27:07.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:07.730 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@721 -- # xtrace_disable 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:07.730 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=4139493 00:27:07.731 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:07.731 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 4139493 00:27:07.731 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@828 -- # '[' -z 4139493 ']' 00:27:07.731 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:07.731 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local max_retries=100 00:27:07.731 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:07.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
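Up to this point the trace is nvmftestinit wiring the two ice ports found above (0000:09:00.0 -> cvl_0_0, 0000:09:00.1 -> cvl_0_1) into a point-to-point NVMe/TCP test topology. The discovery step resolves each matched PCI function to its kernel netdev through sysfs; roughly:

    # map each discovered PCI function to its netdev name (e.g. 0000:09:00.0 -> cvl_0_0)
    for pci in 0000:09:00.0 0000:09:00.1; do
        ls "/sys/bus/pci/devices/$pci/net/"
    done

nvmf_tcp_init then moves the target-side port into its own network namespace, addresses both ends, opens port 4420, and ping-verifies the link before the target starts. A minimal sketch, condensed from the commands traced above (interface names and addresses are the ones used in this run):

    # target side lives in its own network namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator side stays in the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let NVMe/TCP traffic (port 4420) in from the initiator interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # verify both directions before launching the target
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt launch above is prefixed with ip netns exec cvl_0_0_ns_spdk, so the target binds 10.0.0.2 inside the namespace while bdevperf will later connect from the root namespace.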
00:27:07.731 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:07.731 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:07.731 [2024-05-15 01:55:31.367076] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:27:07.731 [2024-05-15 01:55:31.367170] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:07.731 EAL: No free 2048 kB hugepages reported on node 1 00:27:07.731 [2024-05-15 01:55:31.444900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:07.731 [2024-05-15 01:55:31.533040] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:07.731 [2024-05-15 01:55:31.533106] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:07.731 [2024-05-15 01:55:31.533135] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:07.731 [2024-05-15 01:55:31.533147] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:07.731 [2024-05-15 01:55:31.533157] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:07.731 [2024-05-15 01:55:31.533317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:07.731 [2024-05-15 01:55:31.533372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:07.731 [2024-05-15 01:55:31.533345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:07.731 [2024-05-15 01:55:31.533375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:07.731 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:07.731 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@861 -- # return 0 00:27:07.731 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:07.731 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@727 -- # xtrace_disable 00:27:07.731 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:07.989 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:07.989 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:07.989 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:07.989 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:07.989 [2024-05-15 01:55:31.672751] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:07.989 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:07.989 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:07.989 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:07.989 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@721 -- # xtrace_disable 00:27:07.989 
01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:07.989 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:07.989 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:07.989 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:07.989 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:07.989 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:07.989 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:07.989 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:07.989 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:07.989 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:07.989 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:07.989 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:07.989 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:07.989 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:07.989 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:07.989 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:07.989 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:07.989 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:07.989 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:07.989 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:07.989 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:07.989 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:07.989 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:07.989 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:07.989 01:55:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:07.989 Malloc1 00:27:07.989 [2024-05-15 01:55:31.747651] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:07.989 [2024-05-15 01:55:31.747963] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:07.989 Malloc2 00:27:07.989 Malloc3 00:27:07.989 Malloc4 00:27:07.989 Malloc5 00:27:08.265 Malloc6 00:27:08.265 Malloc7 00:27:08.265 Malloc8 00:27:08.265 Malloc9 00:27:08.265 Malloc10 00:27:08.265 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:08.265 01:55:32 
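The create_subsystems step just traced loops over i=1..10, appending one RPC batch per subsystem to rpcs.txt (the repeated cat calls) and replaying the whole file through a single rpc_cmd at shutdown.sh@35; that is what produces Malloc1 through Malloc10 and the cnode listeners on 10.0.0.2:4420. Each batch presumably boils down to standard rpc.py calls along these lines (the method names are real SPDK RPCs; the bdev size and serial number below are illustrative, not taken from this run):

    i=1                                 # first of the ten subsystems
    rpc_py=./scripts/rpc.py             # path assumed; the test routes these through rpc_cmd
    $rpc_py bdev_malloc_create -b Malloc$i 128 512   # illustrative: 128 MiB bdev, 512 B blocks
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420

The decode_rpc_listen_address deprecation warning above is emitted while the listener RPC's address is parsed ([listen_]address.transport giving way to trtype).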
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:08.265 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@727 -- # xtrace_disable 00:27:08.265 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:08.540 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=4139614 00:27:08.540 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 4139614 /var/tmp/bdevperf.sock 00:27:08.540 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@828 -- # '[' -z 4139614 ']' 00:27:08.540 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:08.540 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:08.540 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:08.540 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local max_retries=100 00:27:08.540 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:08.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:08.540 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:27:08.540 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:08.540 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:27:08.540 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:08.540 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:08.540 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:08.540 { 00:27:08.540 "params": { 00:27:08.540 "name": "Nvme$subsystem", 00:27:08.540 "trtype": "$TEST_TRANSPORT", 00:27:08.540 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:08.540 "adrfam": "ipv4", 00:27:08.540 "trsvcid": "$NVMF_PORT", 00:27:08.540 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:08.540 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:08.540 "hdgst": ${hdgst:-false}, 00:27:08.540 "ddgst": ${ddgst:-false} 00:27:08.540 }, 00:27:08.540 "method": "bdev_nvme_attach_controller" 00:27:08.540 } 00:27:08.540 EOF 00:27:08.540 )") 00:27:08.540 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:08.540 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:08.540 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:08.540 { 00:27:08.540 "params": { 00:27:08.540 "name": "Nvme$subsystem", 00:27:08.540 "trtype": "$TEST_TRANSPORT", 00:27:08.540 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:08.540 "adrfam": "ipv4", 00:27:08.540 "trsvcid": "$NVMF_PORT", 00:27:08.540 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:08.540 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:27:08.540 "hdgst": ${hdgst:-false}, 00:27:08.540 "ddgst": ${ddgst:-false} 00:27:08.540 }, 00:27:08.540 "method": "bdev_nvme_attach_controller" 00:27:08.540 } 00:27:08.540 EOF 00:27:08.540 )") 00:27:08.540 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:08.540 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:08.540 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:08.540 { 00:27:08.540 "params": { 00:27:08.540 "name": "Nvme$subsystem", 00:27:08.540 "trtype": "$TEST_TRANSPORT", 00:27:08.540 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:08.540 "adrfam": "ipv4", 00:27:08.540 "trsvcid": "$NVMF_PORT", 00:27:08.540 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:08.540 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:08.540 "hdgst": ${hdgst:-false}, 00:27:08.540 "ddgst": ${ddgst:-false} 00:27:08.540 }, 00:27:08.540 "method": "bdev_nvme_attach_controller" 00:27:08.540 } 00:27:08.540 EOF 00:27:08.540 )") 00:27:08.540 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:08.540 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:08.540 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:08.540 { 00:27:08.540 "params": { 00:27:08.540 "name": "Nvme$subsystem", 00:27:08.540 "trtype": "$TEST_TRANSPORT", 00:27:08.540 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:08.540 "adrfam": "ipv4", 00:27:08.540 "trsvcid": "$NVMF_PORT", 00:27:08.540 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:08.540 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:08.540 "hdgst": ${hdgst:-false}, 00:27:08.540 "ddgst": ${ddgst:-false} 00:27:08.540 }, 00:27:08.540 "method": "bdev_nvme_attach_controller" 00:27:08.540 } 00:27:08.540 EOF 00:27:08.540 )") 00:27:08.540 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:08.540 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:08.540 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:08.540 { 00:27:08.540 "params": { 00:27:08.540 "name": "Nvme$subsystem", 00:27:08.540 "trtype": "$TEST_TRANSPORT", 00:27:08.540 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:08.540 "adrfam": "ipv4", 00:27:08.540 "trsvcid": "$NVMF_PORT", 00:27:08.540 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:08.540 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:08.540 "hdgst": ${hdgst:-false}, 00:27:08.540 "ddgst": ${ddgst:-false} 00:27:08.540 }, 00:27:08.540 "method": "bdev_nvme_attach_controller" 00:27:08.540 } 00:27:08.540 EOF 00:27:08.540 )") 00:27:08.540 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:08.540 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:08.540 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:08.540 { 00:27:08.540 "params": { 00:27:08.540 "name": "Nvme$subsystem", 00:27:08.540 "trtype": "$TEST_TRANSPORT", 00:27:08.540 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:08.540 "adrfam": "ipv4", 00:27:08.540 "trsvcid": "$NVMF_PORT", 00:27:08.540 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:08.540 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:08.540 "hdgst": 
${hdgst:-false}, 00:27:08.540 "ddgst": ${ddgst:-false} 00:27:08.540 }, 00:27:08.540 "method": "bdev_nvme_attach_controller" 00:27:08.540 } 00:27:08.540 EOF 00:27:08.540 )") 00:27:08.540 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:08.540 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:08.540 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:08.540 { 00:27:08.540 "params": { 00:27:08.540 "name": "Nvme$subsystem", 00:27:08.540 "trtype": "$TEST_TRANSPORT", 00:27:08.540 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:08.540 "adrfam": "ipv4", 00:27:08.540 "trsvcid": "$NVMF_PORT", 00:27:08.540 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:08.540 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:08.540 "hdgst": ${hdgst:-false}, 00:27:08.540 "ddgst": ${ddgst:-false} 00:27:08.540 }, 00:27:08.540 "method": "bdev_nvme_attach_controller" 00:27:08.540 } 00:27:08.540 EOF 00:27:08.540 )") 00:27:08.540 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:08.540 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:08.540 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:08.540 { 00:27:08.540 "params": { 00:27:08.540 "name": "Nvme$subsystem", 00:27:08.540 "trtype": "$TEST_TRANSPORT", 00:27:08.540 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:08.540 "adrfam": "ipv4", 00:27:08.540 "trsvcid": "$NVMF_PORT", 00:27:08.540 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:08.540 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:08.540 "hdgst": ${hdgst:-false}, 00:27:08.540 "ddgst": ${ddgst:-false} 00:27:08.540 }, 00:27:08.540 "method": "bdev_nvme_attach_controller" 00:27:08.540 } 00:27:08.540 EOF 00:27:08.540 )") 00:27:08.541 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:08.541 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:08.541 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:08.541 { 00:27:08.541 "params": { 00:27:08.541 "name": "Nvme$subsystem", 00:27:08.541 "trtype": "$TEST_TRANSPORT", 00:27:08.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:08.541 "adrfam": "ipv4", 00:27:08.541 "trsvcid": "$NVMF_PORT", 00:27:08.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:08.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:08.541 "hdgst": ${hdgst:-false}, 00:27:08.541 "ddgst": ${ddgst:-false} 00:27:08.541 }, 00:27:08.541 "method": "bdev_nvme_attach_controller" 00:27:08.541 } 00:27:08.541 EOF 00:27:08.541 )") 00:27:08.541 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:08.541 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:08.541 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:08.541 { 00:27:08.541 "params": { 00:27:08.541 "name": "Nvme$subsystem", 00:27:08.541 "trtype": "$TEST_TRANSPORT", 00:27:08.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:08.541 "adrfam": "ipv4", 00:27:08.541 "trsvcid": "$NVMF_PORT", 00:27:08.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:08.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:08.541 "hdgst": ${hdgst:-false}, 00:27:08.541 
"ddgst": ${ddgst:-false} 00:27:08.541 }, 00:27:08.541 "method": "bdev_nvme_attach_controller" 00:27:08.541 } 00:27:08.541 EOF 00:27:08.541 )") 00:27:08.541 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:08.541 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:27:08.541 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:27:08.541 01:55:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:08.541 "params": { 00:27:08.541 "name": "Nvme1", 00:27:08.541 "trtype": "tcp", 00:27:08.541 "traddr": "10.0.0.2", 00:27:08.541 "adrfam": "ipv4", 00:27:08.541 "trsvcid": "4420", 00:27:08.541 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:08.541 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:08.541 "hdgst": false, 00:27:08.541 "ddgst": false 00:27:08.541 }, 00:27:08.541 "method": "bdev_nvme_attach_controller" 00:27:08.541 },{ 00:27:08.541 "params": { 00:27:08.541 "name": "Nvme2", 00:27:08.541 "trtype": "tcp", 00:27:08.541 "traddr": "10.0.0.2", 00:27:08.541 "adrfam": "ipv4", 00:27:08.541 "trsvcid": "4420", 00:27:08.541 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:08.541 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:08.541 "hdgst": false, 00:27:08.541 "ddgst": false 00:27:08.541 }, 00:27:08.541 "method": "bdev_nvme_attach_controller" 00:27:08.541 },{ 00:27:08.541 "params": { 00:27:08.541 "name": "Nvme3", 00:27:08.541 "trtype": "tcp", 00:27:08.541 "traddr": "10.0.0.2", 00:27:08.541 "adrfam": "ipv4", 00:27:08.541 "trsvcid": "4420", 00:27:08.541 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:08.541 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:08.541 "hdgst": false, 00:27:08.541 "ddgst": false 00:27:08.541 }, 00:27:08.541 "method": "bdev_nvme_attach_controller" 00:27:08.541 },{ 00:27:08.541 "params": { 00:27:08.541 "name": "Nvme4", 00:27:08.541 "trtype": "tcp", 00:27:08.541 "traddr": "10.0.0.2", 00:27:08.541 "adrfam": "ipv4", 00:27:08.541 "trsvcid": "4420", 00:27:08.541 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:08.541 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:08.541 "hdgst": false, 00:27:08.541 "ddgst": false 00:27:08.541 }, 00:27:08.541 "method": "bdev_nvme_attach_controller" 00:27:08.541 },{ 00:27:08.541 "params": { 00:27:08.541 "name": "Nvme5", 00:27:08.541 "trtype": "tcp", 00:27:08.541 "traddr": "10.0.0.2", 00:27:08.541 "adrfam": "ipv4", 00:27:08.541 "trsvcid": "4420", 00:27:08.541 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:08.541 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:08.541 "hdgst": false, 00:27:08.541 "ddgst": false 00:27:08.541 }, 00:27:08.541 "method": "bdev_nvme_attach_controller" 00:27:08.541 },{ 00:27:08.541 "params": { 00:27:08.541 "name": "Nvme6", 00:27:08.541 "trtype": "tcp", 00:27:08.541 "traddr": "10.0.0.2", 00:27:08.541 "adrfam": "ipv4", 00:27:08.541 "trsvcid": "4420", 00:27:08.541 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:08.541 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:08.541 "hdgst": false, 00:27:08.541 "ddgst": false 00:27:08.541 }, 00:27:08.541 "method": "bdev_nvme_attach_controller" 00:27:08.541 },{ 00:27:08.541 "params": { 00:27:08.541 "name": "Nvme7", 00:27:08.541 "trtype": "tcp", 00:27:08.541 "traddr": "10.0.0.2", 00:27:08.541 "adrfam": "ipv4", 00:27:08.541 "trsvcid": "4420", 00:27:08.541 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:08.541 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:08.541 "hdgst": false, 00:27:08.541 "ddgst": false 00:27:08.541 }, 00:27:08.541 "method": "bdev_nvme_attach_controller" 00:27:08.541 
},{ 00:27:08.541 "params": { 00:27:08.541 "name": "Nvme8", 00:27:08.541 "trtype": "tcp", 00:27:08.541 "traddr": "10.0.0.2", 00:27:08.541 "adrfam": "ipv4", 00:27:08.541 "trsvcid": "4420", 00:27:08.541 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:08.541 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:08.541 "hdgst": false, 00:27:08.541 "ddgst": false 00:27:08.541 }, 00:27:08.541 "method": "bdev_nvme_attach_controller" 00:27:08.541 },{ 00:27:08.541 "params": { 00:27:08.541 "name": "Nvme9", 00:27:08.541 "trtype": "tcp", 00:27:08.541 "traddr": "10.0.0.2", 00:27:08.541 "adrfam": "ipv4", 00:27:08.541 "trsvcid": "4420", 00:27:08.541 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:08.541 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:08.541 "hdgst": false, 00:27:08.541 "ddgst": false 00:27:08.541 }, 00:27:08.541 "method": "bdev_nvme_attach_controller" 00:27:08.541 },{ 00:27:08.541 "params": { 00:27:08.541 "name": "Nvme10", 00:27:08.541 "trtype": "tcp", 00:27:08.541 "traddr": "10.0.0.2", 00:27:08.541 "adrfam": "ipv4", 00:27:08.541 "trsvcid": "4420", 00:27:08.541 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:08.541 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:08.541 "hdgst": false, 00:27:08.541 "ddgst": false 00:27:08.541 }, 00:27:08.541 "method": "bdev_nvme_attach_controller" 00:27:08.541 }' 00:27:08.541 [2024-05-15 01:55:32.239847] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:27:08.541 [2024-05-15 01:55:32.239923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4139614 ] 00:27:08.541 EAL: No free 2048 kB hugepages reported on node 1 00:27:08.541 [2024-05-15 01:55:32.313162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:08.541 [2024-05-15 01:55:32.395133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:09.911 Running I/O for 10 seconds... 
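gen_nvmf_target_json, traced above, assembles one bdev_nvme_attach_controller stanza per subsystem and hands the result to bdevperf through process substitution (--json /dev/fd/63), so bdevperf attaches Nvme1 through Nvme10 over 10.0.0.2:4420 and runs the verify workload (-q 64 -o 65536 -w verify -t 10). Once "Running I/O for 10 seconds..." is printed, the waitforio helper polls the bdevperf RPC socket until Nvme1n1 has accumulated at least 100 reads, which is what the read_io_count lines below show. A sketch of that polling loop, condensed from the trace (threshold, retry budget, and sleep interval as seen in this run; the rpc.py path is assumed):

    sock=/var/tmp/bdevperf.sock
    threshold=100                        # reads required before the target may be killed
    ops=0
    for i in {10..1}; do                 # up to ten polls, 0.25 s apart
        ops=$(./scripts/rpc.py -s "$sock" bdev_get_iostat -b Nvme1n1 \
                | jq -r '.bdevs[0].num_read_ops')
        (( ops >= threshold )) && break
        sleep 0.25
    done
    (( ops >= threshold )) || exit 1     # bdevperf never generated enough I/O

In this run the first poll read 67 ops and the second read 131, so the loop exits with ret=0 and the shutdown can proceed against a target that is demonstrably mid-I/O.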
00:27:10.475 01:55:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:10.475 01:55:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@861 -- # return 0 00:27:10.475 01:55:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:10.475 01:55:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:10.475 01:55:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:10.475 01:55:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:10.475 01:55:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:10.475 01:55:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:10.475 01:55:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:10.475 01:55:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:10.475 01:55:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:27:10.475 01:55:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:27:10.475 01:55:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:10.475 01:55:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:10.475 01:55:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:10.475 01:55:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:10.475 01:55:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:10.475 01:55:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:10.475 01:55:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:10.475 01:55:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:27:10.475 01:55:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:10.475 01:55:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:10.748 01:55:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:10.748 01:55:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:10.748 01:55:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:10.748 01:55:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:10.748 01:55:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:10.748 01:55:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:10.748 01:55:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:10.748 01:55:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@60 -- # read_io_count=131 00:27:10.748 01:55:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:27:10.748 01:55:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:27:10.748 01:55:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:27:10.748 01:55:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:27:10.748 01:55:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 4139493 00:27:10.748 01:55:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@947 -- # '[' -z 4139493 ']' 00:27:10.748 01:55:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # kill -0 4139493 00:27:10.748 01:55:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # uname 00:27:10.748 01:55:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:27:10.748 01:55:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4139493 00:27:10.748 01:55:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:27:10.748 01:55:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:27:10.748 01:55:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4139493' 00:27:10.748 killing process with pid 4139493 00:27:10.748 01:55:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # kill 4139493 00:27:10.748 01:55:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@971 -- # wait 4139493 00:27:10.748 [2024-05-15 01:55:34.561834] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:10.748 [2024-05-15 01:55:34.564470] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944a60 is same with the state(5) to be set 00:27:10.748 [2024-05-15 01:55:34.564505] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944a60 is same with the state(5) to be set 00:27:10.748 [2024-05-15 01:55:34.564520] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944a60 is same with the state(5) to be set 00:27:10.748 [2024-05-15 01:55:34.564543] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944a60 is same with the state(5) to be set 00:27:10.748 [2024-05-15 01:55:34.564558] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944a60 is same with the state(5) to be set 00:27:10.748 [2024-05-15 01:55:34.564572] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944a60 is same with the state(5) to be set 00:27:10.748 [2024-05-15 01:55:34.564585] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944a60 is same with the state(5) to be set 00:27:10.748 [2024-05-15 01:55:34.564598] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944a60 is same with the state(5) to be set 00:27:10.748 [2024-05-15 01:55:34.564612] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944a60 is same with the state(5) to be set 00:27:10.748 
[... tcp.c:1598:nvmf_tcp_qpair_set_recv_state message for tqpair=0x944a60 repeated through 2024-05-15 01:55:34.565333 ...]
00:27:10.748 [2024-05-15 01:55:34.566728] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb90880 is same with the state(5) to be set
[... message repeated for tqpair=0xb90880 through 2024-05-15 01:55:34.567591 ...]
00:27:10.749 [2024-05-15 01:55:34.570576] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9453a0 is same with the state(5) to be set
[... message repeated for tqpair=0x9453a0 through 2024-05-15 01:55:34.571039; the capture cuts off mid-entry at 01:55:34.571055 ...]
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9453a0 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.571068] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9453a0 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.571081] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9453a0 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.571094] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9453a0 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.571107] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9453a0 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.571119] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9453a0 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.571131] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9453a0 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.571144] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9453a0 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.571156] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9453a0 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.571168] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9453a0 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.571180] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9453a0 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.571192] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9453a0 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.571204] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9453a0 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.571223] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9453a0 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.571238] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9453a0 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.571250] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9453a0 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.571263] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9453a0 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.571275] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9453a0 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.571288] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9453a0 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.571300] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9453a0 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.571312] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9453a0 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.571324] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9453a0 is same with the state(5) to be set 
00:27:10.749 [2024-05-15 01:55:34.571336] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9453a0 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.571348] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9453a0 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.571361] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9453a0 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.572811] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.572842] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.572857] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.572875] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.572891] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.572905] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.572924] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.572937] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.572951] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.572964] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.572977] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.572989] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573001] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573015] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573029] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573041] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573053] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573068] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573080] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is 
same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573093] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573105] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573117] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573131] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573144] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573156] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573169] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573183] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573196] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573209] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573230] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573249] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573266] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573279] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573291] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573303] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573318] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573331] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573344] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573356] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573368] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573381] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573393] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573405] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573417] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573429] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573441] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573453] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573465] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573477] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573489] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573502] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573514] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573526] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573538] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573551] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573563] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573575] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573592] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573605] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573617] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.573629] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945840 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.574511] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.574540] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.574556] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.574570] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.574583] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.574595] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.574607] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.574621] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.749 [2024-05-15 01:55:34.574634] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.574646] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.574659] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.574671] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.574683] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.574695] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.574709] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.574723] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.574735] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.574747] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.574759] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.574773] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.574786] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.574800] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.574812] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 
00:27:10.750 [2024-05-15 01:55:34.574825] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.574842] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.574855] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.574867] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.574882] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.574895] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.574907] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.574919] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.574933] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.574946] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.574958] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.574970] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.574984] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.574996] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.575008] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.575020] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.575032] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.575044] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.575057] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.575070] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.575083] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.575095] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is 
same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.575107] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.575120] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.575133] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.575146] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.575158] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.575170] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.575188] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.575201] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.575213] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.575235] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.575250] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.575269] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.575302] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.575317] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.575329] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.575342] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.575362] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.575384] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945ce0 is same with the state(5) to be set 00:27:10.750 [2024-05-15 01:55:34.576167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:10.750 [2024-05-15 01:55:34.576212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.750 [2024-05-15 01:55:34.576239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:10.750 [2024-05-15 01:55:34.576254] nvme_qpair.c: 
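For context on the flood of identical lines above: state(5) is the numeric value of the PDU-receive state the qpair keeps being pushed back into while the connection is torn down, and the message comes from a guard that refuses same-state transitions. The sketch below illustrates that guard pattern only; the enum names, struct layout, and set_recv_state helper are illustrative assumptions, not code copied from SPDK's tcp.c.

#include <stdio.h>

/* Illustrative recv-state machine; the enum names and ordering are
 * assumptions -- only the guard pattern and the numeric state(5) come
 * from the log above. */
enum pdu_recv_state {
    RECV_STATE_AWAIT_PDU_READY = 0,
    RECV_STATE_AWAIT_PDU_CH,
    RECV_STATE_AWAIT_PDU_PSH,
    RECV_STATE_AWAIT_PDU_PAYLOAD,
    RECV_STATE_QUIESCING,
    RECV_STATE_ERROR,            /* == 5, the state being re-set above */
};

struct tcp_qpair {
    enum pdu_recv_state recv_state;
};

/* Same-state guard: re-setting the state a qpair already holds is logged
 * and ignored, so a teardown path that keeps forcing the same state emits
 * one identical *ERROR* line per attempt -- hence the flood. */
static void set_recv_state(struct tcp_qpair *tqpair, enum pdu_recv_state state)
{
    if (tqpair->recv_state == state) {
        fprintf(stderr, "*ERROR*: The recv state of tqpair=%p is same with the state(%d) to be set\n",
                (void *)tqpair, (int)state);
        return;
    }
    tqpair->recv_state = state;
}

int main(void)
{
    struct tcp_qpair q = { .recv_state = RECV_STATE_AWAIT_PDU_READY };

    set_recv_state(&q, RECV_STATE_ERROR);   /* transition succeeds silently */
    set_recv_state(&q, RECV_STATE_ERROR);   /* repeat attempt logs the error */
    return 0;
}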
00:27:10.750 [2024-05-15 01:55:34.576278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:10.750 [2024-05-15 01:55:34.576292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.750 [2024-05-15 01:55:34.576307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:10.750 [2024-05-15 01:55:34.576320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.750 [2024-05-15 01:55:34.576333] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc6680 is same with the state(5) to be set
[... the same four-command ASYNC EVENT REQUEST / ABORTED - SQ DELETION cycle, closed by an nvme_tcp.c:323 recv-state *ERROR*, repeats for tqpair=0x208f930 (01:55:34.576397-576535) and tqpair=0x1ef3910 (01:55:34.576594-576711) ...]
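The "(00/08)" pair printed with every ABORTED completion above reads as (status code type / status code). SCT 0x0 is NVMe's generic command status set, and SC 0x08 in that set is "Command Aborted due to SQ Deletion" -- the status a controller returns for commands still outstanding when their submission queue is deleted, which is exactly what a qpair disconnect does. A minimal decoder for just this case (decode_status is a hypothetical helper, not an SPDK API):

#include <stdio.h>

/* Reads the "(SCT/SC)" pair shown in the completions above; the status
 * meanings are from the NVMe specification's generic status set. */
static const char *decode_status(unsigned sct, unsigned sc)
{
    if (sct == 0x0) {                  /* generic command status set */
        if (sc == 0x00)
            return "SUCCESS";
        if (sc == 0x08)
            return "ABORTED - SQ DELETION";
    }
    return "OTHER";
}

int main(void)
{
    /* The pair printed throughout this log: */
    printf("(00/08) -> %s\n", decode_status(0x0, 0x08));
    return 0;
}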
00:27:10.750 [2024-05-15 01:55:34.576734] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x946180 is same with the state(5) to be set
[... from here to 01:55:34.577658 the target-side tcp.c:1598 *ERROR* line for tqpair=0x946180 repeats ~60x and is interleaved character-by-character with the host-side messages, two writers sharing one output stream; the recoverable host-side content follows ...]
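From this point the two halves of the test (the nvmf target logging through tcp.c and the host initiator logging through nvme_tcp.c/nvme_qpair.c) write to the same output stream without serialization, so their records split each other mid-token. A toy reproduction of that failure mode, assuming each record is emitted as more than one write(2) call; this illustrates the interleaving only and is not SPDK's logging path:

#include <pthread.h>
#include <sched.h>
#include <string.h>
#include <unistd.h>

/* Each writer emits one log record as two separate write() calls; the gap
 * between the calls is where the other thread's output lands, splitting
 * records mid-token exactly like the surrounding log lines. */
static void *writer(void *arg)
{
    const char **parts = arg;

    for (int i = 0; i < 5; i++) {
        write(STDOUT_FILENO, parts[0], strlen(parts[0]));
        sched_yield();                 /* widen the race window for the demo */
        write(STDOUT_FILENO, parts[1], strlen(parts[1]));
    }
    return NULL;
}

int main(void)
{
    const char *target[] = { "tcp.c:1598: ... is same with t", "he state(5) to be set\n" };
    const char *host[]   = { "nvme_qpair.c: 474: ... cid:0 c", "dw0:0 sqhd:0000 p:0 m:0 dnr:0\n" };
    pthread_t t1, t2;

    pthread_create(&t1, NULL, writer, target);
    pthread_create(&t2, NULL, writer, host);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;   /* build with: cc -pthread demo.c */
}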
00:27:10.750 [2024-05-15 01:55:34.576889] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f14140 is same with the state(5) to be set
[... the four-command ASYNC EVENT REQUEST / ABORTED - SQ DELETION cycle plus closing nvme_tcp.c:323 recv-state *ERROR* repeats for tqpair=0x1ef2bd0 (01:55:34.576948-577084) and tqpair=0x1ad0cb0 (01:55:34.577130-577270), with the tqpair=0x946180 tcp.c:1598 *ERROR* line threaded through all of it ...]
00:27:10.751 [2024-05-15 01:55:34.577414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.751 [2024-05-15 01:55:34.577437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE / ABORTED - SQ DELETION (00/08) pairs for qid:1 cid:1 through cid:6 (lba:16512, 16640, 16768, 16896, 17024, 17152; 01:55:34.577464-577647) ...]
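A pattern worth noting in the aborted I/O that follows: the LBA advances by exactly the transfer length per command, lba = 16384 + 128 * cid (cid:7 -> 16384 + 7*128 = 17280, cid:36 -> 16384 + 36*128 = 20992), so these completions appear to be the in-flight commands of a sequential 128-block write stream being failed back when the submission queue is deleted.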
00:27:10.751 [2024-05-15 01:55:34.577658] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x946180 is same with the state(5) to be set
00:27:10.751 [2024-05-15 01:55:34.577663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.751 [2024-05-15 01:55:34.577678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE / ABORTED - SQ DELETION pairs continue for cid:8 through cid:35 (lba:17408-20864, stepping 128 per cid; 01:55:34.577694-578551) ...]
00:27:10.751 [2024-05-15 01:55:34.578567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA
BLOCK TRANSPORT 0x0 00:27:10.751 [2024-05-15 01:55:34.578582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.751 [2024-05-15 01:55:34.578597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.751 [2024-05-15 01:55:34.578612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.751 [2024-05-15 01:55:34.578627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.751 [2024-05-15 01:55:34.578642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.751 [2024-05-15 01:55:34.578661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.751 [2024-05-15 01:55:34.578676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.751 [2024-05-15 01:55:34.578691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.751 [2024-05-15 01:55:34.578706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.751 [2024-05-15 01:55:34.578721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.751 [2024-05-15 01:55:34.578736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.751 [2024-05-15 01:55:34.578752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.751 [2024-05-15 01:55:34.578767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.751 [2024-05-15 01:55:34.578783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.751 [2024-05-15 01:55:34.578797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.751 [2024-05-15 01:55:34.578814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.751 [2024-05-15 01:55:34.578836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.751 [2024-05-15 01:55:34.578863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.751 [2024-05-15 01:55:34.578881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.751 [2024-05-15 01:55:34.578897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:10.751 [2024-05-15 01:55:34.578912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.751 [2024-05-15 01:55:34.578928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.751 [2024-05-15 01:55:34.578943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.751 [2024-05-15 01:55:34.578959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.751 [2024-05-15 01:55:34.578973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.751 [2024-05-15 01:55:34.578989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.751 [2024-05-15 01:55:34.579003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.751 [2024-05-15 01:55:34.579018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.751 [2024-05-15 01:55:34.579032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.751 [2024-05-15 01:55:34.579048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.751 [2024-05-15 01:55:34.579066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.751 [2024-05-15 01:55:34.579082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.751 [2024-05-15 01:55:34.579097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.751 [2024-05-15 01:55:34.579112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.751 [2024-05-15 01:55:34.579126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.751 [2024-05-15 01:55:34.579121] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x946640 is same with the state(5) to be set 00:27:10.751 [2024-05-15 01:55:34.579142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.751 [2024-05-15 01:55:34.579148] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x946640 is same with the state(5) to be set 00:27:10.751 [2024-05-15 01:55:34.579156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.751 [2024-05-15 01:55:34.579163] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x946640 is same with the state(5) to be set 00:27:10.751 [2024-05-15 01:55:34.579172] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.751 [2024-05-15 01:55:34.579176] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x946640 is same with the state(5) to be set 00:27:10.751 [2024-05-15 01:55:34.579187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.751 [2024-05-15 01:55:34.579191] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x946640 is same with the state(5) to be set 00:27:10.751 [2024-05-15 01:55:34.579204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:1[2024-05-15 01:55:34.579204] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x946640 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.751 he state(5) to be set 00:27:10.751 [2024-05-15 01:55:34.579228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-05-15 01:55:34.579228] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x946640 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.751 he state(5) to be set 00:27:10.751 [2024-05-15 01:55:34.579246] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x946640 is same with the state(5) to be set 00:27:10.751 [2024-05-15 01:55:34.579248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.751 [2024-05-15 01:55:34.579268] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x946640 is same with t[2024-05-15 01:55:34.579269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(5) to be set 00:27:10.751 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.751 [2024-05-15 01:55:34.579285] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x946640 is same with the state(5) to be set 00:27:10.751 [2024-05-15 01:55:34.579288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.751 [2024-05-15 01:55:34.579299] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x946640 is same with the state(5) to be set 00:27:10.752 [2024-05-15 01:55:34.579303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.752 [2024-05-15 01:55:34.579312] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x946640 is same with the state(5) to be set 00:27:10.752 [2024-05-15 01:55:34.579323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:1[2024-05-15 01:55:34.579325] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x946640 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.752 he state(5) to be set 00:27:10.752 [2024-05-15 01:55:34.579339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.752 [2024-05-15 01:55:34.579341] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x946640 is same with the state(5) to be set 00:27:10.752 [2024-05-15 
01:55:34.579354] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x946640 is same with t[2024-05-15 01:55:34.579355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:1he state(5) to be set 00:27:10.752 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.752 [2024-05-15 01:55:34.579369] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x946640 is same with the state(5) to be set 00:27:10.752 [2024-05-15 01:55:34.579370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.752 [2024-05-15 01:55:34.579382] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x946640 is same with the state(5) to be set 00:27:10.752 [2024-05-15 01:55:34.579387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.752 [2024-05-15 01:55:34.579395] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x946640 is same with the state(5) to be set 00:27:10.752 [2024-05-15 01:55:34.579401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.752 [2024-05-15 01:55:34.579409] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x946640 is same with the state(5) to be set 00:27:10.752 [2024-05-15 01:55:34.579417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.752 [2024-05-15 01:55:34.579423] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x946640 is same with the state(5) to be set 00:27:10.752 [2024-05-15 01:55:34.579431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.752 [2024-05-15 01:55:34.579436] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x946640 is same with the state(5) to be set 00:27:10.752 [2024-05-15 01:55:34.579447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:1[2024-05-15 01:55:34.579449] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x946640 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.752 he state(5) to be set 00:27:10.752 [2024-05-15 01:55:34.579463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.752 [2024-05-15 01:55:34.579465] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x946640 is same with the state(5) to be set 00:27:10.752 [2024-05-15 01:55:34.579478] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad5050 is same [2024-05-15 01:55:34.579479] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x946640 is same with twith the state(5) to be set 00:27:10.752 he state(5) to be set 00:27:10.752 [2024-05-15 01:55:34.579494] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x946640 is same with the state(5) to be set 00:27:10.752 [2024-05-15 01:55:34.579506] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x946640 is same with the state(5) to be set 00:27:10.752 [2024-05-15 
01:55:34.579526] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x946640 is same with the state(5) to be set 00:27:10.752 [2024-05-15 01:55:34.579544] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x946640 is same with the state(5) to be set 00:27:10.752 [2024-05-15 01:55:34.579557] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x946640 is same with the state(5) to be set 00:27:10.752 [2024-05-15 01:55:34.579569] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ad5050 was disconnected and fr[2024-05-15 01:55:34.579572] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x946640 is same with teed. reset controller. 00:27:10.752 he state(5) to be set 00:27:10.752 [2024-05-15 01:55:34.579587] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x946640 is same with the state(5) to be set 00:27:10.752 [2024-05-15 01:55:34.579601] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x946640 is same with the state(5) to be set 00:27:10.752 [2024-05-15 01:55:34.579614] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x946640 is same with the state(5) to be set 00:27:10.752 [2024-05-15 01:55:34.579621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.752 [2024-05-15 01:55:34.579639] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x946640 is same with the state(5) to be set 00:27:10.752 [2024-05-15 01:55:34.579641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.752 [2024-05-15 01:55:34.579653] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x946640 is same with the state(5) to be set 00:27:10.752 [2024-05-15 01:55:34.579662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.752 [2024-05-15 01:55:34.579668] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x946640 is same with the state(5) to be set 00:27:10.752 [2024-05-15 01:55:34.579677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.752 [2024-05-15 01:55:34.579681] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x946640 is same with the state(5) to be set 00:27:10.752 [2024-05-15 01:55:34.579694] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x946640 is same with t[2024-05-15 01:55:34.579694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128he state(5) to be set 00:27:10.752 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.752 [2024-05-15 01:55:34.579709] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x946640 is same with t[2024-05-15 01:55:34.579710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(5) to be set 00:27:10.752 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.752 [2024-05-15 01:55:34.579723] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x946640 is same with the state(5) to be set 00:27:10.752 [2024-05-15 01:55:34.579728] nvme_qpair.c: 
[... 01:55:34.579653-34.580059: the same READ command/completion pair repeats for cid:1-13 (nsid:1, lba:16512-18048 stepping by 128, len:128), each command aborted with SQ DELETION (00/08), still interleaved, and in places collided mid-line, with the repeating tqpair=0x946640 recv-state *ERROR* line (last repeat at 01:55:34.580049) ...]
[... 01:55:34.580075-34.581105: the READ command/completion pairs continue uninterrupted for cid:14-47 (lba:18176-22400, len:128), each aborted with SQ DELETION (00/08) ...]
[... 01:55:34.581121-34.581627: READ command/completion pairs for cid:48-63 (lba:22528-24448, len:128), each aborted with SQ DELETION (00/08); from 01:55:34.581131 interleaved, and in places collided mid-line, with a repeating "tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x946ae0 is same with the state(5) to be set" line ...]
[... 01:55:34.581643-34.581721: the tqpair=0x946ae0 recv-state *ERROR* line repeats ...]
00:27:10.753 [2024-05-15 01:55:34.581724] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2081e20 was disconnected and freed. reset controller.
[... 01:55:34.581734-34.582042: the tqpair=0x946ae0 recv-state *ERROR* line repeats ...]
00:27:10.753 [2024-05-15 01:55:34.582803] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x946f80 is same with the state(5) to be set
[... 01:55:34.582830-34.583647: the same tqpair=0x946f80 recv-state *ERROR* line repeats ~60 more times ...]
00:27:10.754 [2024-05-15 01:55:34.585138] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting
controller 00:27:10.754 [2024-05-15 01:55:34.585174] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:27:10.754 [2024-05-15 01:55:34.585201] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef3910 (9): Bad file descriptor 00:27:10.754 [2024-05-15 01:55:34.585229] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef2bd0 (9): Bad file descriptor 00:27:10.754 [2024-05-15 01:55:34.586435] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:10.754 [2024-05-15 01:55:34.587013] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:10.754 [2024-05-15 01:55:34.587225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.754 [2024-05-15 01:55:34.587338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.754 [2024-05-15 01:55:34.587363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef2bd0 with addr=10.0.0.2, port=4420 00:27:10.754 [2024-05-15 01:55:34.587379] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef2bd0 is same with the state(5) to be set 00:27:10.754 [2024-05-15 01:55:34.587476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.754 [2024-05-15 01:55:34.587587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.754 [2024-05-15 01:55:34.587610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef3910 with addr=10.0.0.2, port=4420 00:27:10.754 [2024-05-15 01:55:34.587625] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef3910 is same with the state(5) to be set 00:27:10.754 [2024-05-15 01:55:34.587667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:10.754 [2024-05-15 01:55:34.587689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.587704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:10.754 [2024-05-15 01:55:34.587719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.587733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:10.754 [2024-05-15 01:55:34.587746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.587760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:10.754 [2024-05-15 01:55:34.587773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.587787] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbea60 is same with the state(5) to be set 00:27:10.754 [2024-05-15 01:55:34.587836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:27:10.754 [2024-05-15 01:55:34.587857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.587878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:10.754 [2024-05-15 01:55:34.587893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.587907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:10.754 [2024-05-15 01:55:34.587921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.587935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:10.754 [2024-05-15 01:55:34.587948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.587961] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbe880 is same with the state(5) to be set 00:27:10.754 [2024-05-15 01:55:34.588010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:10.754 [2024-05-15 01:55:34.588031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.588056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:10.754 [2024-05-15 01:55:34.588079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.588095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:10.754 [2024-05-15 01:55:34.588110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.588124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:10.754 [2024-05-15 01:55:34.588137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.588150] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ff610 is same with the state(5) to be set 00:27:10.754 [2024-05-15 01:55:34.588179] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fc6680 (9): Bad file descriptor 00:27:10.754 [2024-05-15 01:55:34.588239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:10.754 [2024-05-15 01:55:34.588260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.588275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:10.754 [2024-05-15 01:55:34.588288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.588302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:10.754 [2024-05-15 01:55:34.588316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.588330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:10.754 [2024-05-15 01:55:34.588343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.588356] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc91b0 is same with the state(5) to be set 00:27:10.754 [2024-05-15 01:55:34.588391] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208f930 (9): Bad file descriptor 00:27:10.754 [2024-05-15 01:55:34.588426] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f14140 (9): Bad file descriptor 00:27:10.754 [2024-05-15 01:55:34.588457] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad0cb0 (9): Bad file descriptor 00:27:10.754 [2024-05-15 01:55:34.588556] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:10.754 [2024-05-15 01:55:34.588626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.754 [2024-05-15 01:55:34.588648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.588671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.754 [2024-05-15 01:55:34.588686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.588702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.754 [2024-05-15 01:55:34.588717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.588733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.754 [2024-05-15 01:55:34.588747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.588763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.754 [2024-05-15 01:55:34.588777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.588792] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.754 [2024-05-15 01:55:34.588807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.588822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.754 [2024-05-15 01:55:34.588836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.588852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.754 [2024-05-15 01:55:34.588866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.588881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.754 [2024-05-15 01:55:34.588903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.588926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.754 [2024-05-15 01:55:34.588941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.588957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.754 [2024-05-15 01:55:34.588976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.588992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.754 [2024-05-15 01:55:34.589007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.589023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.754 [2024-05-15 01:55:34.589037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.589053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.754 [2024-05-15 01:55:34.589067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.589083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.754 [2024-05-15 01:55:34.589097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.589113] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.754 [2024-05-15 01:55:34.589127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.589144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.754 [2024-05-15 01:55:34.589158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.589174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.754 [2024-05-15 01:55:34.589188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.589204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.754 [2024-05-15 01:55:34.589224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.589242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.754 [2024-05-15 01:55:34.589257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.589273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.754 [2024-05-15 01:55:34.589287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.589303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.754 [2024-05-15 01:55:34.589317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.589333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.754 [2024-05-15 01:55:34.589347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.589363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.754 [2024-05-15 01:55:34.589381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.589397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.754 [2024-05-15 01:55:34.589412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.589428] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.754 [2024-05-15 01:55:34.589442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.589458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.754 [2024-05-15 01:55:34.589472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.589488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.754 [2024-05-15 01:55:34.589503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.589518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.754 [2024-05-15 01:55:34.589533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.589550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.754 [2024-05-15 01:55:34.589564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.589580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.754 [2024-05-15 01:55:34.589594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.589610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.754 [2024-05-15 01:55:34.589624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.589640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.754 [2024-05-15 01:55:34.589654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.589669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.754 [2024-05-15 01:55:34.589684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.589699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.754 [2024-05-15 01:55:34.589713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.589729] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.754 [2024-05-15 01:55:34.589744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.589763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.754 [2024-05-15 01:55:34.589778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.589793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.754 [2024-05-15 01:55:34.589808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.589824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.754 [2024-05-15 01:55:34.589839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.589854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.754 [2024-05-15 01:55:34.589869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.589885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.754 [2024-05-15 01:55:34.589899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.589915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.754 [2024-05-15 01:55:34.589929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.589945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.754 [2024-05-15 01:55:34.589959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.589975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.754 [2024-05-15 01:55:34.589989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.590006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.754 [2024-05-15 01:55:34.590021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.590036] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.754 [2024-05-15 01:55:34.590051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.754 [2024-05-15 01:55:34.590067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.590081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.590097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.590111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.590128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.590145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.590161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.590175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.590191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.590206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.590227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.590243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.590259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.590274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.590290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.590304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.590320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.590334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.590350] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.590365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.590381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.590395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.590411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.590425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.590441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.590455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.590471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.590486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.590502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.590516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.590536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.590553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.590569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.590583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.590599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.590613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.590628] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2083370 is same with the state(5) to be set 00:27:10.755 [2024-05-15 01:55:34.590727] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2083370 was disconnected and freed. reset controller. 
00:27:10.755 [2024-05-15 01:55:34.590937] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:10.755 [2024-05-15 01:55:34.591014] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:10.755 [2024-05-15 01:55:34.591090] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:10.755 [2024-05-15 01:55:34.591389] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef2bd0 (9): Bad file descriptor 00:27:10.755 [2024-05-15 01:55:34.591418] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef3910 (9): Bad file descriptor 00:27:10.755 [2024-05-15 01:55:34.592637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.592662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.592683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.592698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.592715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.592729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.592746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.592760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.592776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.592791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.592807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.592821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.592836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.592856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.592872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.592887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.592903] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.592918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.592934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.592948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.592964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.592978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.592994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.593008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.593024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.593038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.593054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.593068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.593083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.593098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.593113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.593127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.593143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.593158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.593174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.593188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.593204] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.593224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.593246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.593270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.593287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.593302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.593318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.593333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.593349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.593363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.593379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.593393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.593410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.593425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.593440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.593455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.593471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.593486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.593502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.593516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.593532] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.593547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.593563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.593577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.593593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.593608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.593624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.593643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.593660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.593675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.593691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.593706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.593722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.593737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.593754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.593768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.593784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.593798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.593814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.755 [2024-05-15 01:55:34.593828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.755 [2024-05-15 01:55:34.593844] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.755 [2024-05-15 01:55:34.593858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... READ cid:44-62 (lba 22016-24320) likewise ABORTED - SQ DELETION (00/08) ...]
00:27:10.756 [2024-05-15 01:55:34.600525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.756 [2024-05-15 01:55:34.600539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.756 [2024-05-15 01:55:34.600555] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2083550 is same with the state(5) to be set
00:27:10.756 [2024-05-15 01:55:34.600717] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2083550 was disconnected and freed. reset controller.
00:27:10.756 [2024-05-15 01:55:34.600923] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:10.756 [2024-05-15 01:55:34.601009] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:10.756 [2024-05-15 01:55:34.601060] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:27:10.756 [2024-05-15 01:55:34.601136] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:27:10.756 [2024-05-15 01:55:34.601155] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:27:10.756 [2024-05-15 01:55:34.601174] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:27:10.756 [2024-05-15 01:55:34.601196] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:27:10.756 [2024-05-15 01:55:34.601210] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:27:10.756 [2024-05-15 01:55:34.601232] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:27:10.756 [2024-05-15 01:55:34.601293] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:10.756 [2024-05-15 01:55:34.601320] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:10.756 [2024-05-15 01:55:34.601352] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbea60 (9): Bad file descriptor
00:27:10.756 [2024-05-15 01:55:34.601388] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbe880 (9): Bad file descriptor
00:27:10.756 [2024-05-15 01:55:34.601421] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ff610 (9): Bad file descriptor
00:27:10.756 [2024-05-15 01:55:34.601452] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:10.756 [2024-05-15 01:55:34.601474] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fc91b0 (9): Bad file descriptor
00:27:10.756 [2024-05-15 01:55:34.602745] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:10.756 [2024-05-15 01:55:34.602770] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:10.756 [2024-05-15 01:55:34.602792] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:27:10.756 [2024-05-15 01:55:34.602961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.756 [2024-05-15 01:55:34.603072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.756 [2024-05-15 01:55:34.603098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f14140 with addr=10.0.0.2, port=4420
00:27:10.756 [2024-05-15 01:55:34.603115] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f14140 is same with the state(5) to be set
00:27:10.756 [2024-05-15 01:55:34.603183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.756 [2024-05-15 01:55:34.603204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... READ cid:1-62 (lba 16512-24320) likewise ABORTED - SQ DELETION (00/08) ...]
00:27:10.757 [2024-05-15 01:55:34.605186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.757 [2024-05-15 01:55:34.605201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.757 [2024-05-15 01:55:34.605220] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad3eb0 is same with the state(5) to be set
00:27:10.757 [2024-05-15 01:55:34.606781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.757 [2024-05-15 01:55:34.606807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... READ cid:1-62 (lba 8320-16128) likewise ABORTED - SQ DELETION (00/08) ...]
00:27:10.757 [2024-05-15 01:55:34.608803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.757 [2024-05-15 01:55:34.608817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.757 [2024-05-15 01:55:34.608833] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2064d60 is same with the state(5) to be set
00:27:10.757 [2024-05-15 01:55:34.611261] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:10.757 [2024-05-15 01:55:34.611295] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:27:10.757 [2024-05-15 01:55:34.611528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.758 [2024-05-15 01:55:34.611641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.758 [2024-05-15 01:55:34.611667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fc6680 with addr=10.0.0.2, port=4420
00:27:10.758 [2024-05-15 01:55:34.611684] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc6680 is same with the state(5) to be set
00:27:10.758 [2024-05-15 01:55:34.611710] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f14140 (9): Bad file descriptor
00:27:10.758 [2024-05-15 01:55:34.611815] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:10.758 [2024-05-15 01:55:34.612311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.758 [2024-05-15 01:55:34.612425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.758 [2024-05-15 01:55:34.612453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ad0cb0 with addr=10.0.0.2, port=4420
00:27:10.758 [2024-05-15 01:55:34.612470] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad0cb0 is same with the state(5) to be set
00:27:10.758 [2024-05-15 01:55:34.612567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.758 [2024-05-15 01:55:34.612660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:10.758 [2024-05-15 01:55:34.612685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208f930 with addr=10.0.0.2, port=4420
00:27:10.758 [2024-05-15 01:55:34.612701] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208f930 is same with the state(5) to be set
00:27:10.758 [2024-05-15 01:55:34.612720] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fc6680 (9): Bad file descriptor
00:27:10.758 [2024-05-15 01:55:34.612738] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:27:10.758 [2024-05-15 01:55:34.612752] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:27:10.758 [2024-05-15 01:55:34.612770] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:27:10.758 [2024-05-15 01:55:34.613119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.758 [2024-05-15 01:55:34.613145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... READ cid:1-38 (lba 16512-21248) likewise ABORTED - SQ DELETION (00/08) ...]
00:27:10.758 [2024-05-15 01:55:34.614361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.758 [2024-05-15 01:55:34.614376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.758 [2024-05-15 01:55:34.614392] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.758 [2024-05-15 01:55:34.614406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.758 [2024-05-15 01:55:34.614423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.758 [2024-05-15 01:55:34.614437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.758 [2024-05-15 01:55:34.614455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.758 [2024-05-15 01:55:34.614469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.758 [2024-05-15 01:55:34.614485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.758 [2024-05-15 01:55:34.614499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.758 [2024-05-15 01:55:34.614516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.758 [2024-05-15 01:55:34.614530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.758 [2024-05-15 01:55:34.614546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.758 [2024-05-15 01:55:34.614562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.758 [2024-05-15 01:55:34.614578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.758 [2024-05-15 01:55:34.614592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.758 [2024-05-15 01:55:34.614609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.758 [2024-05-15 01:55:34.614623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.758 [2024-05-15 01:55:34.614639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.758 [2024-05-15 01:55:34.614654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.758 [2024-05-15 01:55:34.614670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.758 [2024-05-15 01:55:34.614684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.758 [2024-05-15 01:55:34.614700] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.758 [2024-05-15 01:55:34.614715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.758 [2024-05-15 01:55:34.614734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.758 [2024-05-15 01:55:34.614749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.758 [2024-05-15 01:55:34.614766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.758 [2024-05-15 01:55:34.614780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.758 [2024-05-15 01:55:34.614796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.758 [2024-05-15 01:55:34.614811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.758 [2024-05-15 01:55:34.614827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.758 [2024-05-15 01:55:34.614842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.758 [2024-05-15 01:55:34.614858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.758 [2024-05-15 01:55:34.614872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.758 [2024-05-15 01:55:34.614889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.758 [2024-05-15 01:55:34.614904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.758 [2024-05-15 01:55:34.614920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.758 [2024-05-15 01:55:34.614934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.758 [2024-05-15 01:55:34.614951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.758 [2024-05-15 01:55:34.614966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.758 [2024-05-15 01:55:34.614981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.758 [2024-05-15 01:55:34.614996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.758 [2024-05-15 01:55:34.615012] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.758 [2024-05-15 01:55:34.615026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.758 [2024-05-15 01:55:34.615042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.758 [2024-05-15 01:55:34.615057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.758 [2024-05-15 01:55:34.615073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.758 [2024-05-15 01:55:34.615087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.758 [2024-05-15 01:55:34.615103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.758 [2024-05-15 01:55:34.615121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.758 [2024-05-15 01:55:34.615136] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20848a0 is same with the state(5) to be set 00:27:10.758 [2024-05-15 01:55:34.616431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.758 [2024-05-15 01:55:34.616454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.758 [2024-05-15 01:55:34.616474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.758 [2024-05-15 01:55:34.616490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.758 [2024-05-15 01:55:34.616506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.758 [2024-05-15 01:55:34.616521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.758 [2024-05-15 01:55:34.616538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.758 [2024-05-15 01:55:34.616552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.616568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.616582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.616598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.616612] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.616628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.616643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.616659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.616673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.616689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.616704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.616720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.616734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.616750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.616765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.616781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.616801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.616818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.616833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.616849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.616863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.616879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.616893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.616909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.616924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.616940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.616955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.616970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.616986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.617002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.617016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.617032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.617047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.617062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.617077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.617093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.617108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.617124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.617138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.617154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.617168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.617189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.617204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.617232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.617249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.617266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.617280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.617296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.617311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.617327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.617341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.617357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.617372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.617388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.617403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.617418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.617433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.617449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.617464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.617480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.617494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.617510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.617525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.617541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.617556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.617572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.617590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.617607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.617622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.617638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.617652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.617668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.617683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.617700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.617715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.617732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.617746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.617763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.617777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.617794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.617808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.617824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.617839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.617855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.617870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:10.759 [2024-05-15 01:55:34.617886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.617901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.617917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.617931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.617949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.617963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.617982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.617997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.618013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.618028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.618044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.618059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.618075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.618090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.618107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.618121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.618138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.618152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.618168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.618183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 
01:55:34.618200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.618220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.618238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.618253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.618270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.618285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.618301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.618316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.618332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.618346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.618363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.618381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.618398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.618412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.618428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.618442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.618457] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2085d80 is same with the state(5) to be set 00:27:10.759 [2024-05-15 01:55:34.619688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.619711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.619732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.619748] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.619764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.619779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.619795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.619810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.619826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.619840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.619856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.619870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.619887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.619902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.619919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.619933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.619949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.619964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.759 [2024-05-15 01:55:34.619980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.759 [2024-05-15 01:55:34.619995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.760 [2024-05-15 01:55:34.620015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.760 [2024-05-15 01:55:34.620030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.760 [2024-05-15 01:55:34.620046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.760 [2024-05-15 01:55:34.620061] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.760 [2024-05-15 01:55:34.620077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.760 [2024-05-15 01:55:34.620091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.760 [2024-05-15 01:55:34.620108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.760 [2024-05-15 01:55:34.620123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.760 [2024-05-15 01:55:34.620139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.760 [2024-05-15 01:55:34.620153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.760 [2024-05-15 01:55:34.620169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.760 [2024-05-15 01:55:34.620184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.760 [2024-05-15 01:55:34.620200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.760 [2024-05-15 01:55:34.620221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.760 [2024-05-15 01:55:34.620239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.760 [2024-05-15 01:55:34.620254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.760 [2024-05-15 01:55:34.620270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.760 [2024-05-15 01:55:34.620285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.760 [2024-05-15 01:55:34.620301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.760 [2024-05-15 01:55:34.620316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.760 [2024-05-15 01:55:34.620332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.760 [2024-05-15 01:55:34.620347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.760 [2024-05-15 01:55:34.620363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.760 [2024-05-15 01:55:34.620378] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.760 [2024-05-15 01:55:34.620394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.760 [2024-05-15 01:55:34.620413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.760 [2024-05-15 01:55:34.620429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.760 [2024-05-15 01:55:34.620444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.760 [2024-05-15 01:55:34.620461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.760 [2024-05-15 01:55:34.620476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.760 [2024-05-15 01:55:34.620492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.760 [2024-05-15 01:55:34.620507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.760 [2024-05-15 01:55:34.620523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.760 [2024-05-15 01:55:34.620538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.760 [2024-05-15 01:55:34.620554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.760 [2024-05-15 01:55:34.620569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.760 [2024-05-15 01:55:34.620585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.760 [2024-05-15 01:55:34.620600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.760 [2024-05-15 01:55:34.620616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.760 [2024-05-15 01:55:34.620631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.760 [2024-05-15 01:55:34.620647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.760 [2024-05-15 01:55:34.620661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.760 [2024-05-15 01:55:34.620677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.760 [2024-05-15 01:55:34.620692] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.760 [2024-05-15 01:55:34.620708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.760 [2024-05-15 01:55:34.620723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.760 [2024-05-15 01:55:34.620739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.760 [2024-05-15 01:55:34.620753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.760 [2024-05-15 01:55:34.620769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.760 [2024-05-15 01:55:34.620784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.760 [2024-05-15 01:55:34.620803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.760 [2024-05-15 01:55:34.620818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.760 [2024-05-15 01:55:34.620834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.760 [2024-05-15 01:55:34.620848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.760 [2024-05-15 01:55:34.620864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.760 [2024-05-15 01:55:34.620879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.760 [2024-05-15 01:55:34.620895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.760 [2024-05-15 01:55:34.620910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.760 [2024-05-15 01:55:34.620926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.760 [2024-05-15 01:55:34.620941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.760 [2024-05-15 01:55:34.620957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.760 [2024-05-15 01:55:34.620971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.760 [2024-05-15 01:55:34.620987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.760 [2024-05-15 01:55:34.621002] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:10.760 [2024-05-15 01:55:34.621017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeated for cid:42-63, lba 21760-24448 in steps of 128 ...]
00:27:10.760 [2024-05-15 01:55:34.621722] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2087260 is same with the state(5) to be set
00:27:10.760 [2024-05-15 01:55:34.622954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeated for cid:0-63, lba 8192-16256 in steps of 128 ...]
00:27:10.761 [2024-05-15 01:55:34.624948] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2063910 is same with the state(5) to be set
00:27:10.761 [2024-05-15 01:55:34.626869] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:27:10.761 [2024-05-15 01:55:34.626907] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:27:10.761 [2024-05-15 01:55:34.626927] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
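Every completion in the two bursts above carries the same status, ABORTED - SQ DELETION (00/08): status code type 0x0 (generic) with status code 0x08, Command Aborted due to SQ Deletion, which is what each command still outstanding on qid:1 receives when its submission queue is torn down during the controller reset. When triaging a run like this it is usually enough to count the aborts and group them by qpair rather than read each notice; a minimal sketch, assuming the run has been captured one log line per line in build.log (a hypothetical file name):

  # total commands aborted by SQ deletion in this run
  grep -c 'ABORTED - SQ DELETION' build.log
  # which qpairs the recv-state errors landed on, most frequent first
  grep -o 'tqpair=0x[0-9a-f]*' build.log | sort | uniq -c | sort -rn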
00:27:10.761 [2024-05-15 01:55:34.626944] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:27:10.761 [2024-05-15 01:55:34.626961] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:27:10.761 [2024-05-15 01:55:34.627036] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad0cb0 (9): Bad file descriptor 00:27:10.761 [2024-05-15 01:55:34.627063] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208f930 (9): Bad file descriptor 00:27:10.761 [2024-05-15 01:55:34.627081] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:27:10.761 [2024-05-15 01:55:34.627094] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:27:10.761 [2024-05-15 01:55:34.627113] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:27:10.761 [2024-05-15 01:55:34.627146] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:10.761 [2024-05-15 01:55:34.627211] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:10.761 [2024-05-15 01:55:34.627242] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:10.761 [2024-05-15 01:55:34.627275] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:10.761 [2024-05-15 01:55:34.627296] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:27:10.761 [2024-05-15 01:55:34.627415] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:27:10.761 task offset: 16384 on job bdev=Nvme2n1 fails
00:27:10.761
00:27:10.761 Latency(us)
00:27:10.761 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:10.761 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:10.761 Job: Nvme1n1 ended in about 0.79 seconds with error
00:27:10.761 Verification LBA range: start 0x0 length 0x400
00:27:10.761 Nvme1n1 : 0.79 162.20 10.14 81.10 0.00 259703.09 23495.87 267192.70
00:27:10.761 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:10.761 Job: Nvme2n1 ended in about 0.77 seconds with error
00:27:10.761 Verification LBA range: start 0x0 length 0x400
00:27:10.761 Nvme2n1 : 0.77 167.03 10.44 83.51 0.00 246017.64 7427.41 271853.04
00:27:10.761 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:10.761 Job: Nvme3n1 ended in about 0.77 seconds with error
00:27:10.761 Verification LBA range: start 0x0 length 0x400
00:27:10.761 Nvme3n1 : 0.77 166.79 10.42 83.39 0.00 240230.27 8349.77 265639.25
00:27:10.761 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:10.761 Job: Nvme4n1 ended in about 0.78 seconds with error
00:27:10.761 Verification LBA range: start 0x0 length 0x400
00:27:10.761 Nvme4n1 : 0.78 165.07 10.32 82.54 0.00 236795.70 9854.67 240784.12
00:27:10.761 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:10.761 Job: Nvme5n1 ended in about 0.79 seconds with error
00:27:10.761 Verification LBA range: start 0x0 length 0x400
00:27:10.761 Nvme5n1 : 0.79 175.69 10.98 75.11 0.00 227775.49 38253.61 228356.55
00:27:10.761 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:10.761 Job: Nvme6n1 ended in about 0.80 seconds with error
00:27:10.761 Verification LBA range: start 0x0 length 0x400
00:27:10.761 Nvme6n1 : 0.80 160.19 10.01 80.09 0.00 232686.43 18641.35 267192.70
00:27:10.761 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:10.761 Job: Nvme7n1 ended in about 0.80 seconds with error
00:27:10.761 Verification LBA range: start 0x0 length 0x400
00:27:10.761 Nvme7n1 : 0.80 159.54 9.97 79.77 0.00 227869.58 33593.27 246997.90
00:27:10.761 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:10.761 Job: Nvme8n1 ended in about 0.81 seconds with error
00:27:10.761 Verification LBA range: start 0x0 length 0x400
00:27:10.761 Nvme8n1 : 0.81 158.89 9.93 79.45 0.00 222989.53 17185.00 264085.81
00:27:10.761 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:10.761 Job: Nvme9n1 ended in about 0.81 seconds with error
00:27:10.761 Verification LBA range: start 0x0 length 0x400
00:27:10.761 Nvme9n1 : 0.81 79.13 4.95 79.13 0.00 327376.97 20486.07 306028.85
00:27:10.761 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:10.761 Job: Nvme10n1 ended in about 0.79 seconds with error
00:27:10.761 Verification LBA range: start 0x0 length 0x400
00:27:10.761 Nvme10n1 : 0.79 80.73 5.05 80.73 0.00 310434.32 20194.80 293601.28
00:27:10.761 ===================================================================================================================
00:27:10.761 Total : 1475.25 92.20 804.83 0.00 248436.13 7427.41 306028.85
00:27:10.761 [2024-05-15 01:55:34.653836] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on
non-zero 00:27:10.761 [2024-05-15 01:55:34.653931] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:27:10.761 [2024-05-15 01:55:34.653968] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:10.761 [2024-05-15 01:55:34.654300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.761 [2024-05-15 01:55:34.654421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.761 [2024-05-15 01:55:34.654463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef3910 with addr=10.0.0.2, port=4420 00:27:10.761 [2024-05-15 01:55:34.654485] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef3910 is same with the state(5) to be set 00:27:10.761 [2024-05-15 01:55:34.654593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.761 [2024-05-15 01:55:34.654692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.761 [2024-05-15 01:55:34.654718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef2bd0 with addr=10.0.0.2, port=4420 00:27:10.761 [2024-05-15 01:55:34.654734] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef2bd0 is same with the state(5) to be set 00:27:10.761 [2024-05-15 01:55:34.654833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.761 [2024-05-15 01:55:34.654937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.761 [2024-05-15 01:55:34.654962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fc91b0 with addr=10.0.0.2, port=4420 00:27:10.761 [2024-05-15 01:55:34.654977] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc91b0 is same with the state(5) to be set 00:27:10.761 [2024-05-15 01:55:34.655081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.761 [2024-05-15 01:55:34.655175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.761 [2024-05-15 01:55:34.655200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19ff610 with addr=10.0.0.2, port=4420 00:27:10.761 [2024-05-15 01:55:34.655221] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ff610 is same with the state(5) to be set 00:27:10.761 [2024-05-15 01:55:34.655239] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:10.761 [2024-05-15 01:55:34.655254] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:10.761 [2024-05-15 01:55:34.655271] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:10.761 [2024-05-15 01:55:34.655293] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:10.761 [2024-05-15 01:55:34.655308] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:10.761 [2024-05-15 01:55:34.655321] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
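In the reconnect attempts above, connect() failed, errno = 111 is ECONNREFUSED: the target side of each TCP qpair is already gone, so every reconnect to 10.0.0.2:4420 is refused and the controllers stay in the failed state. The same condition can be reproduced from a shell against a port with no listener; a minimal sketch using bash's /dev/tcp (address and port taken from the trace, run in a subshell so a failed redirect does not kill the caller):

  # connect() to a port with no listener fails with ECONNREFUSED (errno 111)
  if ! (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
      echo "connect refused, matching the errno = 111 lines above"
  fi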
00:27:10.761 [2024-05-15 01:55:34.656460] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:27:10.761 [2024-05-15 01:55:34.656489] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:10.761 [2024-05-15 01:55:34.656505] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:10.761 [2024-05-15 01:55:34.656640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.761 [2024-05-15 01:55:34.656738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.761 [2024-05-15 01:55:34.656764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbe880 with addr=10.0.0.2, port=4420 00:27:10.761 [2024-05-15 01:55:34.656780] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbe880 is same with the state(5) to be set 00:27:10.761 [2024-05-15 01:55:34.656876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.761 [2024-05-15 01:55:34.656974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.761 [2024-05-15 01:55:34.656999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbea60 with addr=10.0.0.2, port=4420 00:27:10.762 [2024-05-15 01:55:34.657014] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbea60 is same with the state(5) to be set 00:27:10.762 [2024-05-15 01:55:34.657046] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef3910 (9): Bad file descriptor 00:27:10.762 [2024-05-15 01:55:34.657071] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef2bd0 (9): Bad file descriptor 00:27:10.762 [2024-05-15 01:55:34.657089] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fc91b0 (9): Bad file descriptor 00:27:10.762 [2024-05-15 01:55:34.657106] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ff610 (9): Bad file descriptor 00:27:10.762 [2024-05-15 01:55:34.657190] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:10.762 [2024-05-15 01:55:34.657224] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:10.762 [2024-05-15 01:55:34.657246] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:10.762 [2024-05-15 01:55:34.657265] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:27:10.762 [2024-05-15 01:55:34.657447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.762 [2024-05-15 01:55:34.657541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.762 [2024-05-15 01:55:34.657567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f14140 with addr=10.0.0.2, port=4420 00:27:10.762 [2024-05-15 01:55:34.657583] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f14140 is same with the state(5) to be set 00:27:10.762 [2024-05-15 01:55:34.657602] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbe880 (9): Bad file descriptor 00:27:10.762 [2024-05-15 01:55:34.657620] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbea60 (9): Bad file descriptor 00:27:10.762 [2024-05-15 01:55:34.657636] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:27:10.762 [2024-05-15 01:55:34.657649] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:27:10.762 [2024-05-15 01:55:34.657663] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:27:10.762 [2024-05-15 01:55:34.657681] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:10.762 [2024-05-15 01:55:34.657695] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:10.762 [2024-05-15 01:55:34.657708] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:27:10.762 [2024-05-15 01:55:34.657725] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:27:10.762 [2024-05-15 01:55:34.657738] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:27:10.762 [2024-05-15 01:55:34.657751] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:27:10.762 [2024-05-15 01:55:34.657767] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:27:10.762 [2024-05-15 01:55:34.657781] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:27:10.762 [2024-05-15 01:55:34.657794] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:27:10.762 [2024-05-15 01:55:34.657875] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:27:10.762 [2024-05-15 01:55:34.657900] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:27:10.762 [2024-05-15 01:55:34.657917] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:10.762 [2024-05-15 01:55:34.657932] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:10.762 [2024-05-15 01:55:34.657950] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:10.762 [2024-05-15 01:55:34.657962] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:10.762 [2024-05-15 01:55:34.657974] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:10.762 [2024-05-15 01:55:34.658010] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f14140 (9): Bad file descriptor 00:27:10.762 [2024-05-15 01:55:34.658030] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:27:10.762 [2024-05-15 01:55:34.658043] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:27:10.762 [2024-05-15 01:55:34.658056] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:27:10.762 [2024-05-15 01:55:34.658073] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:27:10.762 [2024-05-15 01:55:34.658087] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:27:10.762 [2024-05-15 01:55:34.658100] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:27:10.762 [2024-05-15 01:55:34.658138] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:10.762 [2024-05-15 01:55:34.658156] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:10.762 [2024-05-15 01:55:34.658242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.762 [2024-05-15 01:55:34.658333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.762 [2024-05-15 01:55:34.658358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fc6680 with addr=10.0.0.2, port=4420 00:27:10.762 [2024-05-15 01:55:34.658374] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc6680 is same with the state(5) to be set 00:27:10.762 [2024-05-15 01:55:34.658457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.762 [2024-05-15 01:55:34.658559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.762 [2024-05-15 01:55:34.658584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208f930 with addr=10.0.0.2, port=4420 00:27:10.762 [2024-05-15 01:55:34.658599] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208f930 is same with the state(5) to be set 00:27:10.762 [2024-05-15 01:55:34.658681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.762 [2024-05-15 01:55:34.658775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.762 [2024-05-15 01:55:34.658799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ad0cb0 with addr=10.0.0.2, port=4420 00:27:10.762 [2024-05-15 01:55:34.658814] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad0cb0 is same with the state(5) to be set 00:27:10.762 [2024-05-15 01:55:34.658829] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:27:10.762 [2024-05-15 01:55:34.658842] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:27:10.762 [2024-05-15 01:55:34.658855] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: 
[nqn.2016-06.io.spdk:cnode4] in failed state. 00:27:10.762 [2024-05-15 01:55:34.658896] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:10.762 [2024-05-15 01:55:34.658919] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fc6680 (9): Bad file descriptor 00:27:10.762 [2024-05-15 01:55:34.658939] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208f930 (9): Bad file descriptor 00:27:10.762 [2024-05-15 01:55:34.658957] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad0cb0 (9): Bad file descriptor 00:27:10.762 [2024-05-15 01:55:34.659018] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:27:10.762 [2024-05-15 01:55:34.659038] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:27:10.762 [2024-05-15 01:55:34.659052] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:27:10.762 [2024-05-15 01:55:34.659069] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:10.762 [2024-05-15 01:55:34.659083] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:10.762 [2024-05-15 01:55:34.659096] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:27:10.762 [2024-05-15 01:55:34.659111] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:10.762 [2024-05-15 01:55:34.659125] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:10.762 [2024-05-15 01:55:34.659138] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:10.762 [2024-05-15 01:55:34.659177] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:10.762 [2024-05-15 01:55:34.659195] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:10.762 [2024-05-15 01:55:34.659207] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
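The Latency(us) table above is bdevperf's per-device summary for the aborted run: after the runtime column come IOPS, MiB/s, Fail/s, TO/s, and average/min/max latency in microseconds. The Total row sums the throughput columns across the ten devices (162.20 + 167.03 + ... + 80.73 = 1475.25 IOPS, up to rounding), while Average/min/max aggregate the latency columns. With each row captured as its own line, timestamp first, as reconstructed above, the totals can be recomputed with awk; a minimal sketch (build.log is a hypothetical file name):

  # recompute the Total throughput columns from the per-device rows
  awk '/Nvme[0-9]+n1 :/ { iops += $5; mibs += $6; fails += $7 }
       END { printf "Total IOPS %.2f  MiB/s %.2f  Fail/s %.2f\n", iops, mibs, fails }' build.log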
00:27:11.328 01:55:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:27:11.328 01:55:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:27:12.264 01:55:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 4139614 00:27:12.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (4139614) - No such process 00:27:12.264 01:55:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:27:12.264 01:55:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:27:12.264 01:55:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:12.264 01:55:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:12.264 01:55:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:12.264 01:55:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:12.264 01:55:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:12.264 01:55:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:27:12.264 01:55:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:12.264 01:55:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:27:12.264 01:55:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:12.264 01:55:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:12.264 rmmod nvme_tcp 00:27:12.264 rmmod nvme_fabrics 00:27:12.264 rmmod nvme_keyring 00:27:12.264 01:55:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:12.588 01:55:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:27:12.588 01:55:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:27:12.588 01:55:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:12.588 01:55:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:12.588 01:55:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:12.588 01:55:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:12.588 01:55:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:12.588 01:55:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:12.588 01:55:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:12.588 01:55:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:12.588 01:55:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:14.486 01:55:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:14.486 00:27:14.486 real 0m7.082s 00:27:14.486 user 0m16.421s 00:27:14.486 sys 0m1.357s 00:27:14.486 
01:55:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:27:14.486 01:55:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:14.486 ************************************ 00:27:14.486 END TEST nvmf_shutdown_tc3 00:27:14.486 ************************************ 00:27:14.487 01:55:38 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:27:14.487 00:27:14.487 real 0m26.721s 00:27:14.487 user 1m12.265s 00:27:14.487 sys 0m6.240s 00:27:14.487 01:55:38 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # xtrace_disable 00:27:14.487 01:55:38 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:14.487 ************************************ 00:27:14.487 END TEST nvmf_shutdown 00:27:14.487 ************************************ 00:27:14.487 01:55:38 nvmf_tcp -- nvmf/nvmf.sh@85 -- # timing_exit target 00:27:14.487 01:55:38 nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:27:14.487 01:55:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:14.487 01:55:38 nvmf_tcp -- nvmf/nvmf.sh@87 -- # timing_enter host 00:27:14.487 01:55:38 nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:27:14.487 01:55:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:14.487 01:55:38 nvmf_tcp -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:27:14.487 01:55:38 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:14.487 01:55:38 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:27:14.487 01:55:38 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:27:14.487 01:55:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:14.487 ************************************ 00:27:14.487 START TEST nvmf_multicontroller 00:27:14.487 ************************************ 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:14.487 * Looking for test storage... 
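Before the multicontroller test starts, note the tolerant-kill pattern in the tc3 teardown above: shutdown.sh line 142 sends kill -9 to a PID that has already exited, the shell prints "No such process", and the script masks the failure (the trace shows kill -9 followed by true at the same line) so the cleanup can continue. A minimal sketch of that idiom, with nvmfpid standing in for the variable the trace shows being cleared:

  # tolerate an already-dead target process; the error message still prints, as in the log
  kill -9 "$nvmfpid" || true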
00:27:14.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:27:14.487 01:55:38 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:27:14.487 01:55:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:17.015 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:17.015 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:27:17.015 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:17.015 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:17.015 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:17.015 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:17.015 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:17.015 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:27:17.015 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:17.015 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:27:17.015 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:27:17.015 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:27:17.015 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:27:17.015 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:27:17.015 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:27:17.015 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:17.015 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:17.015 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:17.015 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:17.015 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:17.015 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:17.015 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:17.015 01:55:40 
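gather_supported_nvmf_pci_devs, whose trace starts here and continues just below, buckets NICs by PCI vendor:device ID (Intel E810 and X722, plus a list of Mellanox ConnectX parts) and then resolves each match to its kernel net device through sysfs. A minimal standalone sketch of those two steps, assuming only the standard sysfs layout; the pci_bus_cache lookup used by the real script is prebuilt elsewhere:

  intel=0x8086
  e810=()
  for dev in /sys/bus/pci/devices/*; do
      vendor=$(<"$dev/vendor"); device=$(<"$dev/device")
      # 0x1592 / 0x159b are the E810 device IDs matched in this run
      if [[ $vendor == "$intel" && $device =~ ^0x(1592|159b)$ ]]; then
          e810+=("${dev##*/}")
          echo "Found ${dev##*/} ($vendor - $device)"
      fi
  done
  # a PCI function maps to its netdev name via its net/ subdirectory
  for pci in "${e810[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      pci_net_devs=("${pci_net_devs[@]##*/}")
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
  done
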
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:17.015 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:17.015 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:17.015 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:17.015 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:17.015 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:17.015 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:17.015 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:17.015 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:17.015 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:17.015 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:17.015 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:17.015 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:17.015 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:17.015 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:17.015 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:17.015 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:17.015 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:17.015 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:17.015 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:17.015 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:17.015 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:17.016 Found net devices under 0000:09:00.0: cvl_0_0 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:17.016 Found net devices under 0000:09:00.1: cvl_0_1 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:17.016 01:55:40 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:17.016 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:17.016 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:27:17.016 00:27:17.016 --- 10.0.0.2 ping statistics --- 00:27:17.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:17.016 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:17.016 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:17.016 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:27:17.016 00:27:17.016 --- 10.0.0.1 ping statistics --- 00:27:17.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:17.016 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@721 -- # xtrace_disable 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=4142360 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 4142360 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@828 -- # '[' -z 4142360 ']' 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- 
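nvmf_tcp_init, just completed above, turns the two ports of one physical E810 into a real TCP path: cvl_0_0 is moved into a private namespace as the target side (10.0.0.2), cvl_0_1 stays in the default namespace as the initiator (10.0.0.1), both directions are proven with a ping, and the kernel NVMe/TCP initiator module is loaded. The same commands, consolidated for reading:

  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                 # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                              # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1          # target -> initiator
  modprobe nvme-tcp                               # kernel NVMe/TCP initiator
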
common/autotest_common.sh@833 -- # local max_retries=100 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:17.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:17.016 01:55:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:17.274 [2024-05-15 01:55:40.989605] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:27:17.274 [2024-05-15 01:55:40.989681] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:17.274 EAL: No free 2048 kB hugepages reported on node 1 00:27:17.274 [2024-05-15 01:55:41.068678] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:17.274 [2024-05-15 01:55:41.154119] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:17.274 [2024-05-15 01:55:41.154178] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:17.274 [2024-05-15 01:55:41.154211] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:17.274 [2024-05-15 01:55:41.154241] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:17.274 [2024-05-15 01:55:41.154266] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:17.275 [2024-05-15 01:55:41.154333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:17.275 [2024-05-15 01:55:41.154426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:17.275 [2024-05-15 01:55:41.154429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:17.532 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:17.532 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@861 -- # return 0 00:27:17.532 01:55:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:17.532 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@727 -- # xtrace_disable 00:27:17.532 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:17.532 01:55:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:17.532 01:55:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:17.532 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.532 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:17.532 [2024-05-15 01:55:41.299623] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:17.532 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.532 01:55:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:17.532 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.532 01:55:41 
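nvmfappstart, traced around this point, launches nvmf_tgt inside the target namespace with core mask 0xE (reactors on cores 1-3, as the notices below confirm) and then blocks in waitforlisten until the app answers on /var/tmp/spdk.sock. A rough equivalent; the polling loop is a hypothetical stand-in for the real waitforlisten helper:

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # stand-in: poll the RPC socket until the target services requests
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
          -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done
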
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:17.532 Malloc0 00:27:17.532 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.533 01:55:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:17.533 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.533 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:17.533 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.533 01:55:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:17.533 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.533 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:17.533 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.533 01:55:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:17.533 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.533 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:17.533 [2024-05-15 01:55:41.360910] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:17.533 [2024-05-15 01:55:41.361241] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:17.533 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.533 01:55:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:17.533 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.533 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:17.533 [2024-05-15 01:55:41.369043] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:17.533 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.533 01:55:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:17.533 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.533 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:17.533 Malloc1 00:27:17.533 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.533 01:55:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:27:17.533 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.533 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:17.533 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.533 01:55:41 
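The WARNING from nvmf_rpc.c above fires because the add_listener call still encodes the transport as [listen_]address.transport, which the target accepts but reports as deprecated in favor of trtype, with removal scheduled for v24.09. Illustrative JSON payloads only, using this run's values and the standard listen_address fields; the surrounding request envelope is omitted:

  # deprecated shape: transport named inside listen_address.transport
  old='{"nqn":"nqn.2016-06.io.spdk:cnode1","listen_address":{"transport":"tcp","adrfam":"ipv4","traddr":"10.0.0.2","trsvcid":"4420"}}'
  # current shape: the same listener expressed with trtype
  new='{"nqn":"nqn.2016-06.io.spdk:cnode1","listen_address":{"trtype":"tcp","adrfam":"ipv4","traddr":"10.0.0.2","trsvcid":"4420"}}'
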
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:27:17.533 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.533 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:17.533 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.533 01:55:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:17.533 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.533 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:17.533 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.533 01:55:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:27:17.533 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.533 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:17.533 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.533 01:55:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=4142496 00:27:17.533 01:55:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:27:17.533 01:55:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:17.533 01:55:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 4142496 /var/tmp/bdevperf.sock 00:27:17.533 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@828 -- # '[' -z 4142496 ']' 00:27:17.533 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:17.533 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local max_retries=100 00:27:17.533 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:17.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
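At this point the target holds two subsystems, each backed by a 64 MiB malloc bdev and listening on both 4420 and 4421, and bdevperf has been launched with -z so it idles until it is configured over its own RPC socket. The rpc.py equivalent of that setup, condensed as a sketch (the rpc shell function is local shorthand, not a helper from the tree):

  rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py "$@"; }
  rpc nvmf_create_transport -t tcp -o -u 8192
  for i in 1 2; do
      rpc bdev_malloc_create 64 512 -b "Malloc$((i - 1))"
      rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
      rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$((i - 1))"
      for port in 4420 4421; do
          rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
              -t tcp -a 10.0.0.2 -s "$port"
      done
  done
  # bdevperf: 128 queued 4 KiB writes for 1 s; -z defers config to RPC time
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
  bdevperf_pid=$!
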
00:27:17.533 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:17.533 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:17.791 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:17.791 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@861 -- # return 0 00:27:17.791 01:55:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:17.791 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.791 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:18.049 NVMe0n1 00:27:18.049 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.049 01:55:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:18.049 01:55:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.050 1 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:18.050 request: 00:27:18.050 { 00:27:18.050 "name": "NVMe0", 00:27:18.050 "trtype": "tcp", 00:27:18.050 "traddr": "10.0.0.2", 00:27:18.050 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:27:18.050 "hostaddr": "10.0.0.2", 00:27:18.050 "hostsvcid": "60000", 00:27:18.050 "adrfam": "ipv4", 00:27:18.050 "trsvcid": "4420", 00:27:18.050 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:18.050 "method": 
"bdev_nvme_attach_controller", 00:27:18.050 "req_id": 1 00:27:18.050 } 00:27:18.050 Got JSON-RPC error response 00:27:18.050 response: 00:27:18.050 { 00:27:18.050 "code": -114, 00:27:18.050 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:18.050 } 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:18.050 request: 00:27:18.050 { 00:27:18.050 "name": "NVMe0", 00:27:18.050 "trtype": "tcp", 00:27:18.050 "traddr": "10.0.0.2", 00:27:18.050 "hostaddr": "10.0.0.2", 00:27:18.050 "hostsvcid": "60000", 00:27:18.050 "adrfam": "ipv4", 00:27:18.050 "trsvcid": "4420", 00:27:18.050 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:18.050 "method": "bdev_nvme_attach_controller", 00:27:18.050 "req_id": 1 00:27:18.050 } 00:27:18.050 Got JSON-RPC error response 00:27:18.050 response: 00:27:18.050 { 00:27:18.050 "code": -114, 00:27:18.050 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:18.050 } 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:18.050 request: 00:27:18.050 { 00:27:18.050 "name": "NVMe0", 00:27:18.050 "trtype": "tcp", 00:27:18.050 "traddr": "10.0.0.2", 00:27:18.050 "hostaddr": "10.0.0.2", 00:27:18.050 "hostsvcid": "60000", 00:27:18.050 "adrfam": "ipv4", 00:27:18.050 "trsvcid": "4420", 00:27:18.050 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:18.050 "multipath": "disable", 00:27:18.050 "method": "bdev_nvme_attach_controller", 00:27:18.050 "req_id": 1 00:27:18.050 } 00:27:18.050 Got JSON-RPC error response 00:27:18.050 response: 00:27:18.050 { 00:27:18.050 "code": -114, 00:27:18.050 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:27:18.050 } 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@641 -- # type -t rpc_cmd 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:18.050 request: 00:27:18.050 { 00:27:18.050 "name": "NVMe0", 00:27:18.050 "trtype": "tcp", 00:27:18.050 "traddr": "10.0.0.2", 00:27:18.050 "hostaddr": "10.0.0.2", 00:27:18.050 "hostsvcid": "60000", 00:27:18.050 "adrfam": "ipv4", 00:27:18.050 "trsvcid": "4420", 00:27:18.050 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:18.050 "multipath": "failover", 00:27:18.050 "method": "bdev_nvme_attach_controller", 00:27:18.050 "req_id": 1 00:27:18.050 } 00:27:18.050 Got JSON-RPC error response 00:27:18.050 response: 00:27:18.050 { 00:27:18.050 "code": -114, 00:27:18.050 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:18.050 } 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:18.050 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:18.050 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.051 01:55:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:18.051 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.051 01:55:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:18.307 00:27:18.307 01:55:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.307 01:55:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:18.307 01:55:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:27:18.307 01:55:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.307 01:55:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:18.307 01:55:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.307 01:55:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:27:18.307 01:55:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:19.679 0 00:27:19.679 01:55:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:27:19.679 01:55:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.679 01:55:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:19.679 01:55:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.679 01:55:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 4142496 00:27:19.679 01:55:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@947 -- # '[' -z 4142496 ']' 00:27:19.679 01:55:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # kill -0 4142496 00:27:19.679 01:55:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # uname 00:27:19.679 01:55:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:27:19.679 01:55:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4142496 00:27:19.679 01:55:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:27:19.679 01:55:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:27:19.679 01:55:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4142496' 00:27:19.679 killing process with pid 4142496 00:27:19.679 01:55:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # kill 4142496 00:27:19.679 01:55:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@971 -- # wait 4142496 00:27:19.679 01:55:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:19.680 01:55:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.680 01:55:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:19.680 01:55:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.680 01:55:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:19.680 01:55:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.680 01:55:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:19.680 01:55:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.680 01:55:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:27:19.680 01:55:43 
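The four rejected attaches above are deliberate negative tests: once NVMe0 owns the path to 10.0.0.2:4420 from host port 60000, re-attaching under the same name fails with JSON-RPC error -114 (EALREADY) whether the host NQN, the subsystem NQN, or multipath=disable/failover is varied. Only a genuinely new path, port 4421, extends NVMe0, and a second controller name can then be attached and detached independently. A sketch of the accept/reject pattern, with RPC and ATTACH as local shorthand:

  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  ATTACH="bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -f ipv4 -n nqn.2016-06.io.spdk:cnode1"
  $RPC $ATTACH -s 4420 -i 10.0.0.2 -c 60000        # first attach: creates NVMe0n1
  if $RPC $ATTACH -s 4420 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001; then
      echo "expected -114: NVMe0 already owns this path" >&2; exit 1
  fi
  $RPC $ATTACH -s 4421                             # new path, same name: accepted
  $RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1        # drop the extra path again
  $RPC bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  [ "$($RPC bdev_nvme_get_controllers | grep -c NVMe)" -eq 2 ] || exit 1
  # drive the queued write workload in the idling bdevperf
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests
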
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:19.680 01:55:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # read -r file 00:27:19.680 01:55:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:27:19.680 01:55:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # sort -u 00:27:19.680 01:55:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # cat 00:27:19.680 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:19.680 [2024-05-15 01:55:41.468867] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:27:19.680 [2024-05-15 01:55:41.468950] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4142496 ] 00:27:19.680 EAL: No free 2048 kB hugepages reported on node 1 00:27:19.680 [2024-05-15 01:55:41.538156] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.680 [2024-05-15 01:55:41.621551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:19.680 [2024-05-15 01:55:42.150032] bdev.c:4575:bdev_name_add: *ERROR*: Bdev name 906048d4-2b2e-4a99-a79e-3839a7b6708e already exists 00:27:19.680 [2024-05-15 01:55:42.150069] bdev.c:7691:bdev_register: *ERROR*: Unable to add uuid:906048d4-2b2e-4a99-a79e-3839a7b6708e alias for bdev NVMe1n1 00:27:19.680 [2024-05-15 01:55:42.150103] bdev_nvme.c:4297:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:27:19.680 Running I/O for 1 seconds... 
00:27:19.680 
00:27:19.680 Latency(us)
00:27:19.680 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:19.680 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:27:19.680 NVMe0n1 : 1.00 17195.40 67.17 0.00 0.00 7431.92 4757.43 13592.65
00:27:19.680 ===================================================================================================================
00:27:19.680 Total : 17195.40 67.17 0.00 0.00 7431.92 4757.43 13592.65
00:27:19.680 Received shutdown signal, test time was about 1.000000 seconds
00:27:19.680 
00:27:19.680 Latency(us)
00:27:19.680 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:19.680 ===================================================================================================================
00:27:19.680 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:19.680 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:19.680 01:55:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1615 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:19.680 01:55:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # read -r file 00:27:19.680 01:55:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:27:19.680 01:55:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:19.680 01:55:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:27:19.680 01:55:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:19.680 01:55:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:27:19.680 01:55:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:19.680 01:55:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:19.680 rmmod nvme_tcp rmmod nvme_fabrics rmmod nvme_keyring 00:27:19.938 01:55:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:19.938 01:55:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:27:19.938 01:55:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:27:19.938 01:55:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 4142360 ']' 00:27:19.938 01:55:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 4142360 00:27:19.938 01:55:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@947 -- # '[' -z 4142360 ']' 00:27:19.938 01:55:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # kill -0 4142360 00:27:19.938 01:55:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # uname 00:27:19.938 01:55:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:27:19.938 01:55:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4142360 00:27:19.938 01:55:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:27:19.938 01:55:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:27:19.938 01:55:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4142360' killing process with pid 4142360 00:27:19.938 01:55:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # kill 4142360 00:27:19.938 [2024-05-15
01:55:43.655330] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:19.938 01:55:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@971 -- # wait 4142360 00:27:20.196 01:55:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:20.196 01:55:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:20.196 01:55:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:20.196 01:55:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:20.196 01:55:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:20.196 01:55:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.196 01:55:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:20.196 01:55:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:22.094 01:55:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:22.094 00:27:22.094 real 0m7.618s 00:27:22.094 user 0m11.064s 00:27:22.094 sys 0m2.580s 00:27:22.094 01:55:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # xtrace_disable 00:27:22.094 01:55:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:22.094 ************************************ 00:27:22.094 END TEST nvmf_multicontroller 00:27:22.094 ************************************ 00:27:22.094 01:55:45 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:22.094 01:55:45 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:27:22.094 01:55:45 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:27:22.094 01:55:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:22.094 ************************************ 00:27:22.094 START TEST nvmf_aer 00:27:22.094 ************************************ 00:27:22.094 01:55:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:22.352 * Looking for test storage... 
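The nvmf_multicontroller teardown traced above unwinds the fixture in reverse order before the harness starts nvmf_aer, which repeats the same nvmftestinit preamble below. A condensed sketch of that teardown, using the pid variables captured at startup; the netns removal is an assumption about what _remove_spdk_ns does on this host:

  kill "$bdevperf_pid" && wait "$bdevperf_pid"    # bdevperf, pid 4142496 in this run
  sync
  set +e                                          # module unload can race; retried up to 20x
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  set -e
  kill "$nvmfpid" && wait "$nvmfpid"              # nvmf_tgt, pid 4142360 in this run
  ip netns delete cvl_0_0_ns_spdk                 # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                        # leave the initiator port unconfigured
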
00:27:22.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:22.352 01:55:46 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:22.352 01:55:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:27:22.352 01:55:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:22.352 01:55:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:22.352 01:55:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:22.352 01:55:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:22.352 01:55:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:22.352 01:55:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:22.352 01:55:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:22.352 01:55:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:22.352 01:55:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:22.352 01:55:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:22.352 01:55:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:22.352 01:55:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:22.352 01:55:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:22.352 01:55:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:22.352 01:55:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:22.352 01:55:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:22.352 01:55:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:22.352 01:55:46 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:22.353 01:55:46 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:22.353 01:55:46 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:22.353 01:55:46 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.353 01:55:46 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.353 01:55:46 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.353 01:55:46 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:27:22.353 01:55:46 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.353 01:55:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:27:22.353 01:55:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:22.353 01:55:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:22.353 01:55:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:22.353 01:55:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:22.353 01:55:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:22.353 01:55:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:22.353 01:55:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:22.353 01:55:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:22.353 01:55:46 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:27:22.353 01:55:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:22.353 01:55:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:22.353 01:55:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:22.353 01:55:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:22.353 01:55:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:22.353 01:55:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:22.353 01:55:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:22.353 01:55:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:22.353 01:55:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:22.353 01:55:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:22.353 01:55:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:27:22.353 01:55:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:24.881 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 
0x159b)' 00:27:24.881 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:24.881 Found net devices under 0000:09:00.0: cvl_0_0 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:24.881 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:24.882 Found net devices under 0000:09:00.1: cvl_0_1 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:24.882 
01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:24.882 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:24.882 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:27:24.882 00:27:24.882 --- 10.0.0.2 ping statistics --- 00:27:24.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:24.882 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:24.882 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:24.882 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:27:24.882 00:27:24.882 --- 10.0.0.1 ping statistics --- 00:27:24.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:24.882 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@721 -- # xtrace_disable 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=4145001 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 4145001 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@828 -- # '[' -z 4145001 ']' 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local max_retries=100 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:24.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:24.882 01:55:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:25.140 [2024-05-15 01:55:48.848851] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:27:25.140 [2024-05-15 01:55:48.848933] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:25.140 EAL: No free 2048 kB hugepages reported on node 1 00:27:25.140 [2024-05-15 01:55:48.929804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:25.140 [2024-05-15 01:55:49.017178] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:25.140 [2024-05-15 01:55:49.017245] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:25.140 [2024-05-15 01:55:49.017263] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:25.140 [2024-05-15 01:55:49.017276] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:25.140 [2024-05-15 01:55:49.017288] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:25.140 [2024-05-15 01:55:49.017386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:25.140 [2024-05-15 01:55:49.017458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:25.140 [2024-05-15 01:55:49.017617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:25.140 [2024-05-15 01:55:49.017620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:25.397 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:25.398 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@861 -- # return 0 00:27:25.398 01:55:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:25.398 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@727 -- # xtrace_disable 00:27:25.398 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:25.398 01:55:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:25.398 01:55:49 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:25.398 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.398 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:25.398 [2024-05-15 01:55:49.173897] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:25.398 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.398 01:55:49 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:27:25.398 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.398 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:25.398 Malloc0 00:27:25.398 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.398 01:55:49 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:27:25.398 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.398 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:25.398 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.398 01:55:49 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:25.398 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.398 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:25.398 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.398 01:55:49 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:25.398 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.398 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:25.398 [2024-05-15 01:55:49.227224] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:25.398 [2024-05-15 01:55:49.227552] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:25.398 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.398 01:55:49 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:27:25.398 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.398 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:25.398 [ 00:27:25.398 { 00:27:25.398 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:25.398 "subtype": "Discovery", 00:27:25.398 "listen_addresses": [], 00:27:25.398 "allow_any_host": true, 00:27:25.398 "hosts": [] 00:27:25.398 }, 00:27:25.398 { 00:27:25.398 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:25.398 "subtype": "NVMe", 00:27:25.398 "listen_addresses": [ 00:27:25.398 { 00:27:25.398 "trtype": "TCP", 00:27:25.398 "adrfam": "IPv4", 00:27:25.398 "traddr": "10.0.0.2", 00:27:25.398 "trsvcid": "4420" 00:27:25.398 } 00:27:25.398 ], 00:27:25.398 "allow_any_host": true, 00:27:25.398 "hosts": [], 00:27:25.398 "serial_number": "SPDK00000000000001", 00:27:25.398 "model_number": "SPDK bdev Controller", 00:27:25.398 "max_namespaces": 2, 00:27:25.398 "min_cntlid": 1, 00:27:25.398 "max_cntlid": 65519, 00:27:25.398 "namespaces": [ 00:27:25.398 { 00:27:25.398 "nsid": 1, 00:27:25.398 "bdev_name": "Malloc0", 00:27:25.398 "name": "Malloc0", 00:27:25.398 "nguid": "C58AA3C3CD134C8CBDBB088B41A8E6B6", 00:27:25.398 "uuid": "c58aa3c3-cd13-4c8c-bdbb-088b41a8e6b6" 00:27:25.398 } 00:27:25.398 ] 00:27:25.398 } 00:27:25.398 ] 00:27:25.398 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.398 01:55:49 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:27:25.398 01:55:49 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:27:25.398 01:55:49 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=4145036 00:27:25.398 01:55:49 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:27:25.398 01:55:49 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:27:25.398 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # local i=0 00:27:25.398 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:25.398 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' 0 -lt 200 ']' 00:27:25.398 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # i=1 00:27:25.398 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # sleep 0.1 00:27:25.398 EAL: No free 2048 kB hugepages reported on node 1 00:27:25.668 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:25.668 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' 1 -lt 200 ']' 00:27:25.668 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # i=2 00:27:25.668 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # sleep 0.1 00:27:25.668 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:27:25.668 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:25.668 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1273 -- # return 0 00:27:25.668 01:55:49 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:27:25.668 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.668 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:25.668 Malloc1 00:27:25.668 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.668 01:55:49 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:27:25.668 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.668 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:25.669 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.669 01:55:49 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:27:25.669 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.669 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:25.669 Asynchronous Event Request test 00:27:25.669 Attaching to 10.0.0.2 00:27:25.669 Attached to 10.0.0.2 00:27:25.669 Registering asynchronous event callbacks... 00:27:25.669 Starting namespace attribute notice tests for all controllers... 00:27:25.669 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:27:25.669 aer_cb - Changed Namespace 00:27:25.669 Cleaning up... 00:27:25.669 [ 00:27:25.669 { 00:27:25.669 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:25.669 "subtype": "Discovery", 00:27:25.669 "listen_addresses": [], 00:27:25.669 "allow_any_host": true, 00:27:25.669 "hosts": [] 00:27:25.669 }, 00:27:25.669 { 00:27:25.669 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:25.669 "subtype": "NVMe", 00:27:25.669 "listen_addresses": [ 00:27:25.669 { 00:27:25.669 "trtype": "TCP", 00:27:25.669 "adrfam": "IPv4", 00:27:25.669 "traddr": "10.0.0.2", 00:27:25.669 "trsvcid": "4420" 00:27:25.669 } 00:27:25.669 ], 00:27:25.669 "allow_any_host": true, 00:27:25.669 "hosts": [], 00:27:25.669 "serial_number": "SPDK00000000000001", 00:27:25.669 "model_number": "SPDK bdev Controller", 00:27:25.669 "max_namespaces": 2, 00:27:25.669 "min_cntlid": 1, 00:27:25.669 "max_cntlid": 65519, 00:27:25.669 "namespaces": [ 00:27:25.669 { 00:27:25.669 "nsid": 1, 00:27:25.669 "bdev_name": "Malloc0", 00:27:25.669 "name": "Malloc0", 00:27:25.669 "nguid": "C58AA3C3CD134C8CBDBB088B41A8E6B6", 00:27:25.669 "uuid": "c58aa3c3-cd13-4c8c-bdbb-088b41a8e6b6" 00:27:25.669 }, 00:27:25.669 { 00:27:25.669 "nsid": 2, 00:27:25.669 "bdev_name": "Malloc1", 00:27:25.669 "name": "Malloc1", 00:27:25.669 "nguid": "E71D81226A7E4A1593F65277F32D1164", 00:27:25.669 "uuid": "e71d8122-6a7e-4a15-93f6-5277f32d1164" 00:27:25.669 } 00:27:25.669 ] 00:27:25.669 } 00:27:25.669 ] 00:27:25.669 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.669 01:55:49 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 4145036 00:27:25.669 01:55:49 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:27:25.669 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.669 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:25.669 01:55:49 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.669 01:55:49 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:27:25.669 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.669 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:25.669 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.669 01:55:49 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:25.669 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.669 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:25.669 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.669 01:55:49 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:27:25.669 01:55:49 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:27:25.669 01:55:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:25.669 01:55:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:27:25.669 01:55:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:25.669 01:55:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:27:25.669 01:55:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:25.669 01:55:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:25.669 rmmod nvme_tcp 00:27:25.926 rmmod nvme_fabrics 00:27:25.926 rmmod nvme_keyring 00:27:25.926 01:55:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:25.926 01:55:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:27:25.926 01:55:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:27:25.926 01:55:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 4145001 ']' 00:27:25.926 01:55:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 4145001 00:27:25.926 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@947 -- # '[' -z 4145001 ']' 00:27:25.926 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # kill -0 4145001 00:27:25.926 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # uname 00:27:25.926 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:27:25.926 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4145001 00:27:25.926 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:27:25.926 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:27:25.926 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4145001' 00:27:25.926 killing process with pid 4145001 00:27:25.926 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # kill 4145001 00:27:25.926 [2024-05-15 01:55:49.654010] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:25.926 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@971 -- # wait 4145001 00:27:26.185 01:55:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:26.185 01:55:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:26.185 01:55:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:26.185 01:55:49 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:26.185 01:55:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:26.185 01:55:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:26.185 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:26.185 01:55:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:28.135 01:55:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:28.136 00:27:28.136 real 0m5.899s 00:27:28.136 user 0m4.177s 00:27:28.136 sys 0m2.349s 00:27:28.136 01:55:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # xtrace_disable 00:27:28.136 01:55:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:28.136 ************************************ 00:27:28.136 END TEST nvmf_aer 00:27:28.136 ************************************ 00:27:28.136 01:55:51 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:28.136 01:55:51 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:27:28.136 01:55:51 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:27:28.136 01:55:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:28.136 ************************************ 00:27:28.136 START TEST nvmf_async_init 00:27:28.136 ************************************ 00:27:28.136 01:55:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:28.136 * Looking for test storage... 00:27:28.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
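
For reference, the nvmf_tcp_init sequence traced in the nvmf_aer run above (and repeated below by nvmf_async_init) isolates the first target port in a network namespace so that a single host can act as both NVMe/TCP target and initiator over the physical link. The following is a standalone sketch of that wiring, recapped from the nvmf/common.sh trace in this log; it assumes the cvl_0_0/cvl_0_1 port names and 10.0.0.0/24 addressing seen in this run, and root privileges:

    # clear any stale addressing on both ports
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    # move the target port into its own namespace and address it
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    # bring up initiator port, target port, and loopback in the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP traffic in on the initiator side, then verify both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

This is why every nvmf_tgt invocation and RPC in these tests is wrapped in 'ip netns exec cvl_0_0_ns_spdk' while the attach/connect side targets 10.0.0.2 from the default namespace.
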
00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:28.136 
01:55:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=22ec2826dba9427da95e7ae026fcd0ce 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:27:28.136 01:55:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:27:30.664 01:55:54 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:30.664 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:30.664 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:30.664 01:55:54 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:30.664 Found net devices under 0000:09:00.0: cvl_0_0 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:30.664 Found net devices under 0000:09:00.1: cvl_0_1 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:30.664 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:30.665 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:27:30.665 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:30.665 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:30.665 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:30.665 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:30.665 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:30.665 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:30.665 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:30.665 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:30.665 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:30.665 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:30.665 01:55:54 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:30.665 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:30.665 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:30.665 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:30.665 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:30.665 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:30.665 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:30.665 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:30.665 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:30.665 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:30.665 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:30.665 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:30.665 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:30.665 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:30.665 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:27:30.665 00:27:30.665 --- 10.0.0.2 ping statistics --- 00:27:30.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:30.665 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:27:30.665 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:30.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:30.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:27:30.665 00:27:30.665 --- 10.0.0.1 ping statistics --- 00:27:30.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:30.665 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:27:30.665 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:30.665 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:27:30.665 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:30.665 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:30.665 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:30.665 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:30.665 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:30.665 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:30.665 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:30.665 01:55:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:27:30.665 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:30.665 01:55:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@721 -- # xtrace_disable 00:27:30.665 01:55:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:30.665 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=4147379 00:27:30.923 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:30.923 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 4147379 00:27:30.923 01:55:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@828 -- # '[' -z 4147379 ']' 00:27:30.923 01:55:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:30.923 01:55:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local max_retries=100 00:27:30.923 01:55:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:30.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:30.923 01:55:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:30.923 01:55:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:30.923 [2024-05-15 01:55:54.640834] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:27:30.923 [2024-05-15 01:55:54.640913] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:30.923 EAL: No free 2048 kB hugepages reported on node 1 00:27:30.923 [2024-05-15 01:55:54.719334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.923 [2024-05-15 01:55:54.804159] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:30.923 [2024-05-15 01:55:54.804228] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:30.923 [2024-05-15 01:55:54.804247] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:30.923 [2024-05-15 01:55:54.804261] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:30.923 [2024-05-15 01:55:54.804273] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:30.923 [2024-05-15 01:55:54.804313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:31.181 01:55:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:31.181 01:55:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@861 -- # return 0 00:27:31.181 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:31.181 01:55:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@727 -- # xtrace_disable 00:27:31.181 01:55:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:31.181 01:55:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:31.181 01:55:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:31.181 01:55:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.181 01:55:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:31.181 [2024-05-15 01:55:54.956095] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:31.181 01:55:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.181 01:55:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:27:31.181 01:55:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.181 01:55:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:31.181 null0 00:27:31.181 01:55:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.181 01:55:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:27:31.181 01:55:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.181 01:55:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:31.181 01:55:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.181 01:55:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:27:31.181 01:55:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.181 01:55:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:31.181 01:55:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.181 01:55:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 22ec2826dba9427da95e7ae026fcd0ce 00:27:31.181 01:55:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.181 01:55:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:31.181 01:55:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.181 01:55:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:31.181 
01:55:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.181 01:55:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:31.181 [2024-05-15 01:55:54.996128] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:31.181 [2024-05-15 01:55:54.996426] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:31.181 01:55:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.181 01:55:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:27:31.181 01:55:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.181 01:55:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:31.438 nvme0n1 00:27:31.438 01:55:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.438 01:55:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:31.438 01:55:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.438 01:55:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:31.438 [ 00:27:31.438 { 00:27:31.438 "name": "nvme0n1", 00:27:31.438 "aliases": [ 00:27:31.438 "22ec2826-dba9-427d-a95e-7ae026fcd0ce" 00:27:31.438 ], 00:27:31.438 "product_name": "NVMe disk", 00:27:31.438 "block_size": 512, 00:27:31.438 "num_blocks": 2097152, 00:27:31.438 "uuid": "22ec2826-dba9-427d-a95e-7ae026fcd0ce", 00:27:31.438 "assigned_rate_limits": { 00:27:31.438 "rw_ios_per_sec": 0, 00:27:31.438 "rw_mbytes_per_sec": 0, 00:27:31.438 "r_mbytes_per_sec": 0, 00:27:31.438 "w_mbytes_per_sec": 0 00:27:31.438 }, 00:27:31.438 "claimed": false, 00:27:31.438 "zoned": false, 00:27:31.438 "supported_io_types": { 00:27:31.438 "read": true, 00:27:31.438 "write": true, 00:27:31.438 "unmap": false, 00:27:31.438 "write_zeroes": true, 00:27:31.438 "flush": true, 00:27:31.438 "reset": true, 00:27:31.438 "compare": true, 00:27:31.438 "compare_and_write": true, 00:27:31.438 "abort": true, 00:27:31.438 "nvme_admin": true, 00:27:31.438 "nvme_io": true 00:27:31.438 }, 00:27:31.438 "memory_domains": [ 00:27:31.438 { 00:27:31.438 "dma_device_id": "system", 00:27:31.438 "dma_device_type": 1 00:27:31.438 } 00:27:31.438 ], 00:27:31.438 "driver_specific": { 00:27:31.438 "nvme": [ 00:27:31.438 { 00:27:31.438 "trid": { 00:27:31.438 "trtype": "TCP", 00:27:31.438 "adrfam": "IPv4", 00:27:31.438 "traddr": "10.0.0.2", 00:27:31.438 "trsvcid": "4420", 00:27:31.438 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:31.438 }, 00:27:31.438 "ctrlr_data": { 00:27:31.438 "cntlid": 1, 00:27:31.438 "vendor_id": "0x8086", 00:27:31.438 "model_number": "SPDK bdev Controller", 00:27:31.438 "serial_number": "00000000000000000000", 00:27:31.438 "firmware_revision": "24.05", 00:27:31.438 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:31.438 "oacs": { 00:27:31.438 "security": 0, 00:27:31.438 "format": 0, 00:27:31.438 "firmware": 0, 00:27:31.438 "ns_manage": 0 00:27:31.438 }, 00:27:31.438 "multi_ctrlr": true, 00:27:31.438 "ana_reporting": false 00:27:31.438 }, 00:27:31.438 "vs": { 00:27:31.438 "nvme_version": "1.3" 00:27:31.438 }, 00:27:31.438 "ns_data": { 00:27:31.438 "id": 1, 00:27:31.438 "can_share": true 00:27:31.438 } 
00:27:31.438 } 00:27:31.438 ], 00:27:31.438 "mp_policy": "active_passive" 00:27:31.438 } 00:27:31.438 } 00:27:31.438 ] 00:27:31.438 01:55:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.438 01:55:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:27:31.438 01:55:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.438 01:55:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:31.438 [2024-05-15 01:55:55.244929] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:31.438 [2024-05-15 01:55:55.245019] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bc3a80 (9): Bad file descriptor 00:27:31.696 [2024-05-15 01:55:55.377399] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:31.696 01:55:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.696 01:55:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:31.696 01:55:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.696 01:55:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:31.696 [ 00:27:31.696 { 00:27:31.696 "name": "nvme0n1", 00:27:31.696 "aliases": [ 00:27:31.696 "22ec2826-dba9-427d-a95e-7ae026fcd0ce" 00:27:31.696 ], 00:27:31.696 "product_name": "NVMe disk", 00:27:31.696 "block_size": 512, 00:27:31.696 "num_blocks": 2097152, 00:27:31.696 "uuid": "22ec2826-dba9-427d-a95e-7ae026fcd0ce", 00:27:31.696 "assigned_rate_limits": { 00:27:31.696 "rw_ios_per_sec": 0, 00:27:31.696 "rw_mbytes_per_sec": 0, 00:27:31.696 "r_mbytes_per_sec": 0, 00:27:31.696 "w_mbytes_per_sec": 0 00:27:31.696 }, 00:27:31.696 "claimed": false, 00:27:31.696 "zoned": false, 00:27:31.696 "supported_io_types": { 00:27:31.696 "read": true, 00:27:31.696 "write": true, 00:27:31.696 "unmap": false, 00:27:31.696 "write_zeroes": true, 00:27:31.696 "flush": true, 00:27:31.696 "reset": true, 00:27:31.696 "compare": true, 00:27:31.696 "compare_and_write": true, 00:27:31.696 "abort": true, 00:27:31.696 "nvme_admin": true, 00:27:31.696 "nvme_io": true 00:27:31.696 }, 00:27:31.696 "memory_domains": [ 00:27:31.697 { 00:27:31.697 "dma_device_id": "system", 00:27:31.697 "dma_device_type": 1 00:27:31.697 } 00:27:31.697 ], 00:27:31.697 "driver_specific": { 00:27:31.697 "nvme": [ 00:27:31.697 { 00:27:31.697 "trid": { 00:27:31.697 "trtype": "TCP", 00:27:31.697 "adrfam": "IPv4", 00:27:31.697 "traddr": "10.0.0.2", 00:27:31.697 "trsvcid": "4420", 00:27:31.697 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:31.697 }, 00:27:31.697 "ctrlr_data": { 00:27:31.697 "cntlid": 2, 00:27:31.697 "vendor_id": "0x8086", 00:27:31.697 "model_number": "SPDK bdev Controller", 00:27:31.697 "serial_number": "00000000000000000000", 00:27:31.697 "firmware_revision": "24.05", 00:27:31.697 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:31.697 "oacs": { 00:27:31.697 "security": 0, 00:27:31.697 "format": 0, 00:27:31.697 "firmware": 0, 00:27:31.697 "ns_manage": 0 00:27:31.697 }, 00:27:31.697 "multi_ctrlr": true, 00:27:31.697 "ana_reporting": false 00:27:31.697 }, 00:27:31.697 "vs": { 00:27:31.697 "nvme_version": "1.3" 00:27:31.697 }, 00:27:31.697 "ns_data": { 00:27:31.697 "id": 1, 00:27:31.697 "can_share": true 00:27:31.697 } 00:27:31.697 } 00:27:31.697 ], 00:27:31.697 "mp_policy": "active_passive" 
00:27:31.697 } 00:27:31.697 } 00:27:31.697 ] 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.WrSnKWsl4U 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.WrSnKWsl4U 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:31.697 [2024-05-15 01:55:55.429602] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:31.697 [2024-05-15 01:55:55.429744] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WrSnKWsl4U 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:31.697 [2024-05-15 01:55:55.437625] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WrSnKWsl4U 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:31.697 [2024-05-15 01:55:55.445634] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:31.697 [2024-05-15 01:55:55.445691] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to 
be removed in v24.09 00:27:31.697 nvme0n1 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:31.697 [ 00:27:31.697 { 00:27:31.697 "name": "nvme0n1", 00:27:31.697 "aliases": [ 00:27:31.697 "22ec2826-dba9-427d-a95e-7ae026fcd0ce" 00:27:31.697 ], 00:27:31.697 "product_name": "NVMe disk", 00:27:31.697 "block_size": 512, 00:27:31.697 "num_blocks": 2097152, 00:27:31.697 "uuid": "22ec2826-dba9-427d-a95e-7ae026fcd0ce", 00:27:31.697 "assigned_rate_limits": { 00:27:31.697 "rw_ios_per_sec": 0, 00:27:31.697 "rw_mbytes_per_sec": 0, 00:27:31.697 "r_mbytes_per_sec": 0, 00:27:31.697 "w_mbytes_per_sec": 0 00:27:31.697 }, 00:27:31.697 "claimed": false, 00:27:31.697 "zoned": false, 00:27:31.697 "supported_io_types": { 00:27:31.697 "read": true, 00:27:31.697 "write": true, 00:27:31.697 "unmap": false, 00:27:31.697 "write_zeroes": true, 00:27:31.697 "flush": true, 00:27:31.697 "reset": true, 00:27:31.697 "compare": true, 00:27:31.697 "compare_and_write": true, 00:27:31.697 "abort": true, 00:27:31.697 "nvme_admin": true, 00:27:31.697 "nvme_io": true 00:27:31.697 }, 00:27:31.697 "memory_domains": [ 00:27:31.697 { 00:27:31.697 "dma_device_id": "system", 00:27:31.697 "dma_device_type": 1 00:27:31.697 } 00:27:31.697 ], 00:27:31.697 "driver_specific": { 00:27:31.697 "nvme": [ 00:27:31.697 { 00:27:31.697 "trid": { 00:27:31.697 "trtype": "TCP", 00:27:31.697 "adrfam": "IPv4", 00:27:31.697 "traddr": "10.0.0.2", 00:27:31.697 "trsvcid": "4421", 00:27:31.697 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:31.697 }, 00:27:31.697 "ctrlr_data": { 00:27:31.697 "cntlid": 3, 00:27:31.697 "vendor_id": "0x8086", 00:27:31.697 "model_number": "SPDK bdev Controller", 00:27:31.697 "serial_number": "00000000000000000000", 00:27:31.697 "firmware_revision": "24.05", 00:27:31.697 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:31.697 "oacs": { 00:27:31.697 "security": 0, 00:27:31.697 "format": 0, 00:27:31.697 "firmware": 0, 00:27:31.697 "ns_manage": 0 00:27:31.697 }, 00:27:31.697 "multi_ctrlr": true, 00:27:31.697 "ana_reporting": false 00:27:31.697 }, 00:27:31.697 "vs": { 00:27:31.697 "nvme_version": "1.3" 00:27:31.697 }, 00:27:31.697 "ns_data": { 00:27:31.697 "id": 1, 00:27:31.697 "can_share": true 00:27:31.697 } 00:27:31.697 } 00:27:31.697 ], 00:27:31.697 "mp_policy": "active_passive" 00:27:31.697 } 00:27:31.697 } 00:27:31.697 ] 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.WrSnKWsl4U 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:31.697 rmmod nvme_tcp 00:27:31.697 rmmod nvme_fabrics 00:27:31.697 rmmod nvme_keyring 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 4147379 ']' 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 4147379 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@947 -- # '[' -z 4147379 ']' 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # kill -0 4147379 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # uname 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:27:31.697 01:55:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4147379 00:27:31.955 01:55:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:27:31.955 01:55:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:27:31.955 01:55:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4147379' 00:27:31.955 killing process with pid 4147379 00:27:31.955 01:55:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # kill 4147379 00:27:31.955 [2024-05-15 01:55:55.629605] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:27:31.955 [2024-05-15 01:55:55.629641] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:31.955 [2024-05-15 01:55:55.629671] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:31.955 01:55:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@971 -- # wait 4147379 00:27:31.955 01:55:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:31.955 01:55:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:31.955 01:55:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:31.955 01:55:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:31.955 01:55:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:31.955 01:55:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:31.955 01:55:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:31.955 01:55:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:34.485 01:55:57 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:34.485 00:27:34.485 real 0m5.922s 00:27:34.485 user 0m2.188s 00:27:34.485 sys 0m2.130s 00:27:34.486 01:55:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # xtrace_disable 00:27:34.486 01:55:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:34.486 ************************************ 00:27:34.486 END TEST nvmf_async_init 00:27:34.486 ************************************ 00:27:34.486 01:55:57 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:34.486 01:55:57 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:27:34.486 01:55:57 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:27:34.486 01:55:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:34.486 ************************************ 00:27:34.486 START TEST dma 00:27:34.486 ************************************ 00:27:34.486 01:55:57 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:34.486 * Looking for test storage... 00:27:34.486 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:34.486 01:55:57 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:34.486 01:55:57 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:27:34.486 01:55:57 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:34.486 01:55:57 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:34.486 01:55:57 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:34.486 01:55:57 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:34.486 01:55:57 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:34.486 01:55:57 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:34.486 01:55:57 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:34.486 01:55:57 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:34.486 01:55:57 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:34.486 01:55:57 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:34.486 01:55:57 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:34.486 01:55:57 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:34.486 01:55:57 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:34.486 01:55:57 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:34.486 01:55:57 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:34.486 01:55:57 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:34.486 01:55:57 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:34.486 01:55:57 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:34.486 01:55:57 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:34.486 01:55:57 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:34.486 01:55:57 nvmf_tcp.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.486 01:55:57 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.486 01:55:57 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.486 01:55:57 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:27:34.486 01:55:57 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.486 01:55:57 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:27:34.486 01:55:57 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:34.486 01:55:57 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:34.486 01:55:57 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:34.486 01:55:57 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:34.486 01:55:57 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:34.486 01:55:57 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:34.486 01:55:57 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:34.486 01:55:57 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:34.486 01:55:57 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:27:34.486 01:55:57 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:27:34.486 00:27:34.486 real 0m0.067s 00:27:34.486 user 0m0.030s 00:27:34.486 sys 0m0.041s 00:27:34.486 01:55:57 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # xtrace_disable 00:27:34.486 01:55:57 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:27:34.486 ************************************ 
00:27:34.486 END TEST dma 00:27:34.486 ************************************ 00:27:34.486 01:55:58 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:34.486 01:55:58 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:27:34.486 01:55:58 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:27:34.486 01:55:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:34.486 ************************************ 00:27:34.486 START TEST nvmf_identify 00:27:34.486 ************************************ 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:34.486 * Looking for test storage... 00:27:34.486 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:27:34.486 01:55:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:37.013 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:37.013 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:37.013 Found net devices under 0000:09:00.0: cvl_0_0 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
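The gather_supported_nvmf_pci_devs trace above matches each port of the E810 NIC (vendor 0x8086, device 0x159b) against a whitelist of supported Intel/Mellanox IDs and then resolves each matching PCI function to its kernel net device through sysfs. A standalone sketch of the same lookup, assuming pciutils is available (the suite itself walks a prebuilt pci_bus_cache rather than calling lspci):

  # enumerate E810 ports by vendor:device ID, then read their net devices from sysfs
  for pci in $(lspci -Dnd 8086:159b | awk '{print $1}'); do
      for dev in "/sys/bus/pci/devices/$pci/net/"*; do
          [ -e "$dev" ] && echo "Found net device under $pci: ${dev##*/}"
      done
  done

The sysfs glob is the same one visible in the xtrace (pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)); only the lspci-based enumeration is an illustrative substitution.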
00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:37.013 Found net devices under 0000:09:00.1: cvl_0_1 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:37.013 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:37.013 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:27:37.013 00:27:37.013 --- 10.0.0.2 ping statistics --- 00:27:37.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:37.013 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:37.013 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:37.013 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:27:37.013 00:27:37.013 --- 10.0.0.1 ping statistics --- 00:27:37.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:37.013 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:37.013 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:37.014 01:56:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:37.014 01:56:00 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:27:37.014 01:56:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@721 -- # xtrace_disable 00:27:37.014 01:56:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:37.014 01:56:00 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=4149801 00:27:37.014 01:56:00 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:37.014 01:56:00 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:37.014 01:56:00 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 4149801 00:27:37.014 01:56:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@828 -- # '[' -z 4149801 ']' 00:27:37.014 01:56:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:37.014 01:56:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local max_retries=100 00:27:37.014 01:56:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:37.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:37.014 01:56:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:37.014 01:56:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:37.014 [2024-05-15 01:56:00.743867] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
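Condensed from the nvmf_tcp_init trace above: the second E810 port stays in the root namespace as the initiator (cvl_0_1, 10.0.0.1) while the first is moved into a private network namespace to act as the target (cvl_0_0, 10.0.0.2), so a single host exercises a real NIC-to-NIC TCP path. The topology boils down to:

  ip netns add cvl_0_0_ns_spdk                                  # target gets its own netns
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side, root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                            # sanity check, both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every nvmf_tgt invocation is then prefixed with ip netns exec cvl_0_0_ns_spdk (NVMF_APP is rebuilt from NVMF_TARGET_NS_CMD just after the pings), which is why the target listens on 10.0.0.2 while the host-side tools connect from 10.0.0.1.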
00:27:37.014 [2024-05-15 01:56:00.743939] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:37.014 EAL: No free 2048 kB hugepages reported on node 1 00:27:37.014 [2024-05-15 01:56:00.826480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:37.014 [2024-05-15 01:56:00.914508] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:37.014 [2024-05-15 01:56:00.914570] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:37.014 [2024-05-15 01:56:00.914587] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:37.014 [2024-05-15 01:56:00.914600] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:37.014 [2024-05-15 01:56:00.914612] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:37.014 [2024-05-15 01:56:00.914696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:37.014 [2024-05-15 01:56:00.914764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:37.014 [2024-05-15 01:56:00.914861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:37.014 [2024-05-15 01:56:00.914864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:37.272 01:56:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:37.272 01:56:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@861 -- # return 0 00:27:37.272 01:56:01 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:37.272 01:56:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.272 01:56:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:37.272 [2024-05-15 01:56:01.046811] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:37.272 01:56:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.272 01:56:01 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:27:37.272 01:56:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@727 -- # xtrace_disable 00:27:37.272 01:56:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:37.272 01:56:01 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:37.272 01:56:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.272 01:56:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:37.272 Malloc0 00:27:37.272 01:56:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.272 01:56:01 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:37.272 01:56:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.272 01:56:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:37.272 01:56:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.272 01:56:01 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 
ABCDEF0123456789 00:27:37.272 01:56:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.272 01:56:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:37.272 01:56:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.272 01:56:01 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:37.272 01:56:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.272 01:56:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:37.272 [2024-05-15 01:56:01.124038] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:37.272 [2024-05-15 01:56:01.124408] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:37.272 01:56:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.272 01:56:01 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:37.272 01:56:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.272 01:56:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:37.272 01:56:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.272 01:56:01 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:27:37.272 01:56:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.272 01:56:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:37.272 [ 00:27:37.272 { 00:27:37.272 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:37.272 "subtype": "Discovery", 00:27:37.272 "listen_addresses": [ 00:27:37.272 { 00:27:37.272 "trtype": "TCP", 00:27:37.272 "adrfam": "IPv4", 00:27:37.272 "traddr": "10.0.0.2", 00:27:37.272 "trsvcid": "4420" 00:27:37.272 } 00:27:37.272 ], 00:27:37.272 "allow_any_host": true, 00:27:37.272 "hosts": [] 00:27:37.272 }, 00:27:37.272 { 00:27:37.272 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:37.272 "subtype": "NVMe", 00:27:37.272 "listen_addresses": [ 00:27:37.272 { 00:27:37.272 "trtype": "TCP", 00:27:37.272 "adrfam": "IPv4", 00:27:37.272 "traddr": "10.0.0.2", 00:27:37.272 "trsvcid": "4420" 00:27:37.272 } 00:27:37.272 ], 00:27:37.272 "allow_any_host": true, 00:27:37.272 "hosts": [], 00:27:37.272 "serial_number": "SPDK00000000000001", 00:27:37.272 "model_number": "SPDK bdev Controller", 00:27:37.272 "max_namespaces": 32, 00:27:37.272 "min_cntlid": 1, 00:27:37.272 "max_cntlid": 65519, 00:27:37.272 "namespaces": [ 00:27:37.272 { 00:27:37.272 "nsid": 1, 00:27:37.272 "bdev_name": "Malloc0", 00:27:37.272 "name": "Malloc0", 00:27:37.272 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:27:37.272 "eui64": "ABCDEF0123456789", 00:27:37.272 "uuid": "23fd0972-947e-4c08-a22d-01998b16038a" 00:27:37.272 } 00:27:37.272 ] 00:27:37.272 } 00:27:37.272 ] 00:27:37.273 01:56:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.273 01:56:01 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:27:37.273 [2024-05-15 
01:56:01.165260] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:27:37.273 [2024-05-15 01:56:01.165303] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4149938 ] 00:27:37.273 EAL: No free 2048 kB hugepages reported on node 1 00:27:37.273 [2024-05-15 01:56:01.201791] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:27:37.273 [2024-05-15 01:56:01.201846] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:37.273 [2024-05-15 01:56:01.201856] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:37.273 [2024-05-15 01:56:01.201872] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:37.273 [2024-05-15 01:56:01.201885] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:37.273 [2024-05-15 01:56:01.202136] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:27:37.273 [2024-05-15 01:56:01.202183] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x15c9120 0 00:27:37.540 [2024-05-15 01:56:01.208231] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:37.540 [2024-05-15 01:56:01.208260] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:37.540 [2024-05-15 01:56:01.208271] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:37.540 [2024-05-15 01:56:01.208278] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:37.540 [2024-05-15 01:56:01.208343] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.540 [2024-05-15 01:56:01.208357] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.540 [2024-05-15 01:56:01.208365] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15c9120) 00:27:37.540 [2024-05-15 01:56:01.208390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:37.540 [2024-05-15 01:56:01.208420] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16221f0, cid 0, qid 0 00:27:37.540 [2024-05-15 01:56:01.216227] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.540 [2024-05-15 01:56:01.216246] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.540 [2024-05-15 01:56:01.216253] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.540 [2024-05-15 01:56:01.216261] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16221f0) on tqpair=0x15c9120 00:27:37.540 [2024-05-15 01:56:01.216299] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:37.540 [2024-05-15 01:56:01.216312] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:27:37.540 [2024-05-15 01:56:01.216322] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:27:37.540 [2024-05-15 01:56:01.216343] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:27:37.540 [2024-05-15 01:56:01.216352] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.540 [2024-05-15 01:56:01.216359] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15c9120) 00:27:37.540 [2024-05-15 01:56:01.216370] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.540 [2024-05-15 01:56:01.216395] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16221f0, cid 0, qid 0 00:27:37.540 [2024-05-15 01:56:01.216539] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.540 [2024-05-15 01:56:01.216554] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.540 [2024-05-15 01:56:01.216562] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.540 [2024-05-15 01:56:01.216569] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16221f0) on tqpair=0x15c9120 00:27:37.540 [2024-05-15 01:56:01.216581] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:27:37.540 [2024-05-15 01:56:01.216594] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:27:37.540 [2024-05-15 01:56:01.216607] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.540 [2024-05-15 01:56:01.216615] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.540 [2024-05-15 01:56:01.216621] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15c9120) 00:27:37.540 [2024-05-15 01:56:01.216632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.540 [2024-05-15 01:56:01.216654] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16221f0, cid 0, qid 0 00:27:37.540 [2024-05-15 01:56:01.216746] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.540 [2024-05-15 01:56:01.216758] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.540 [2024-05-15 01:56:01.216766] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.540 [2024-05-15 01:56:01.216773] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16221f0) on tqpair=0x15c9120 00:27:37.540 [2024-05-15 01:56:01.216783] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:27:37.540 [2024-05-15 01:56:01.216797] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:27:37.540 [2024-05-15 01:56:01.216810] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.540 [2024-05-15 01:56:01.216817] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.540 [2024-05-15 01:56:01.216824] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15c9120) 00:27:37.540 [2024-05-15 01:56:01.216839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.540 [2024-05-15 01:56:01.216861] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16221f0, cid 0, qid 0 00:27:37.540 [2024-05-15 01:56:01.216947] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.540 [2024-05-15 01:56:01.216960] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.540 [2024-05-15 01:56:01.216967] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.540 [2024-05-15 01:56:01.216974] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16221f0) on tqpair=0x15c9120 00:27:37.540 [2024-05-15 01:56:01.216985] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:37.540 [2024-05-15 01:56:01.217002] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.540 [2024-05-15 01:56:01.217010] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.540 [2024-05-15 01:56:01.217017] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15c9120) 00:27:37.540 [2024-05-15 01:56:01.217028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.540 [2024-05-15 01:56:01.217048] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16221f0, cid 0, qid 0 00:27:37.540 [2024-05-15 01:56:01.217139] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.540 [2024-05-15 01:56:01.217153] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.540 [2024-05-15 01:56:01.217161] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.540 [2024-05-15 01:56:01.217168] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16221f0) on tqpair=0x15c9120 00:27:37.540 [2024-05-15 01:56:01.217179] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:27:37.540 [2024-05-15 01:56:01.217188] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:27:37.540 [2024-05-15 01:56:01.217201] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:37.540 [2024-05-15 01:56:01.217312] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:27:37.540 [2024-05-15 01:56:01.217323] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:37.540 [2024-05-15 01:56:01.217339] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.540 [2024-05-15 01:56:01.217347] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.540 [2024-05-15 01:56:01.217353] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15c9120) 00:27:37.540 [2024-05-15 01:56:01.217364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.540 [2024-05-15 01:56:01.217386] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16221f0, cid 0, qid 0 00:27:37.540 [2024-05-15 01:56:01.217522] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.540 [2024-05-15 01:56:01.217535] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
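The DEBUG flood here is the admin-queue bring-up state machine, compressed: FABRIC CONNECT to the discovery subsystem, PROPERTY GETs for VS and CAP, a CC read showing CC.EN = 0 && CSTS.RDY = 0, a PROPERTY SET writing CC.EN = 1, a wait for CSTS.RDY = 1, and only then the first IDENTIFY (opcode 06h, cdw10 CNS = 01h, identify controller). The -L all flag on the spdk_nvme_identify invocation is what enables these per-PDU traces; the same run without it should print only the decoded identify data (path relative to the spdk checkout):

  # identical discovery-controller identify, minus the per-PDU debug logging
  ./build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'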
00:27:37.540 [2024-05-15 01:56:01.217542] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.540 [2024-05-15 01:56:01.217549] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16221f0) on tqpair=0x15c9120 00:27:37.540 [2024-05-15 01:56:01.217559] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:37.540 [2024-05-15 01:56:01.217576] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.540 [2024-05-15 01:56:01.217585] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.540 [2024-05-15 01:56:01.217595] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15c9120) 00:27:37.540 [2024-05-15 01:56:01.217607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.540 [2024-05-15 01:56:01.217628] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16221f0, cid 0, qid 0 00:27:37.541 [2024-05-15 01:56:01.217718] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.541 [2024-05-15 01:56:01.217733] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.541 [2024-05-15 01:56:01.217740] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.541 [2024-05-15 01:56:01.217747] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16221f0) on tqpair=0x15c9120 00:27:37.541 [2024-05-15 01:56:01.217756] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:37.541 [2024-05-15 01:56:01.217765] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:27:37.541 [2024-05-15 01:56:01.217780] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:27:37.541 [2024-05-15 01:56:01.217795] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:27:37.541 [2024-05-15 01:56:01.217810] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.541 [2024-05-15 01:56:01.217818] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15c9120) 00:27:37.541 [2024-05-15 01:56:01.217829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.541 [2024-05-15 01:56:01.217850] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16221f0, cid 0, qid 0 00:27:37.541 [2024-05-15 01:56:01.218002] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:37.541 [2024-05-15 01:56:01.218018] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:37.541 [2024-05-15 01:56:01.218025] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:37.541 [2024-05-15 01:56:01.218033] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15c9120): datao=0, datal=4096, cccid=0 00:27:37.541 [2024-05-15 01:56:01.218041] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16221f0) on tqpair(0x15c9120): expected_datao=0, 
payload_size=4096 00:27:37.541 [2024-05-15 01:56:01.218050] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.541 [2024-05-15 01:56:01.218062] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:37.541 [2024-05-15 01:56:01.218072] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:37.541 [2024-05-15 01:56:01.218104] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.541 [2024-05-15 01:56:01.218119] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.541 [2024-05-15 01:56:01.218126] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.541 [2024-05-15 01:56:01.218133] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16221f0) on tqpair=0x15c9120 00:27:37.541 [2024-05-15 01:56:01.218146] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:27:37.541 [2024-05-15 01:56:01.218156] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:27:37.541 [2024-05-15 01:56:01.218164] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:27:37.541 [2024-05-15 01:56:01.218173] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:27:37.541 [2024-05-15 01:56:01.218181] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:27:37.541 [2024-05-15 01:56:01.218194] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:27:37.541 [2024-05-15 01:56:01.218223] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:27:37.541 [2024-05-15 01:56:01.218242] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.541 [2024-05-15 01:56:01.218251] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.541 [2024-05-15 01:56:01.218258] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15c9120) 00:27:37.541 [2024-05-15 01:56:01.218269] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:37.541 [2024-05-15 01:56:01.218291] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16221f0, cid 0, qid 0 00:27:37.541 [2024-05-15 01:56:01.218431] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.541 [2024-05-15 01:56:01.218446] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.541 [2024-05-15 01:56:01.218454] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.541 [2024-05-15 01:56:01.218461] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16221f0) on tqpair=0x15c9120 00:27:37.541 [2024-05-15 01:56:01.218480] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.541 [2024-05-15 01:56:01.218488] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.541 [2024-05-15 01:56:01.218495] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15c9120) 00:27:37.541 [2024-05-15 01:56:01.218506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.541 [2024-05-15 01:56:01.218516] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.541 [2024-05-15 01:56:01.218524] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.541 [2024-05-15 01:56:01.218530] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x15c9120) 00:27:37.541 [2024-05-15 01:56:01.218539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.541 [2024-05-15 01:56:01.218549] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.541 [2024-05-15 01:56:01.218556] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.541 [2024-05-15 01:56:01.218563] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x15c9120) 00:27:37.541 [2024-05-15 01:56:01.218572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.541 [2024-05-15 01:56:01.218582] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.541 [2024-05-15 01:56:01.218589] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.541 [2024-05-15 01:56:01.218595] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15c9120) 00:27:37.541 [2024-05-15 01:56:01.218604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.541 [2024-05-15 01:56:01.218613] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:27:37.541 [2024-05-15 01:56:01.218629] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:37.541 [2024-05-15 01:56:01.218640] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.541 [2024-05-15 01:56:01.218648] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15c9120) 00:27:37.541 [2024-05-15 01:56:01.218658] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.541 [2024-05-15 01:56:01.218681] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16221f0, cid 0, qid 0 00:27:37.541 [2024-05-15 01:56:01.218696] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1622350, cid 1, qid 0 00:27:37.541 [2024-05-15 01:56:01.218705] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16224b0, cid 2, qid 0 00:27:37.541 [2024-05-15 01:56:01.218713] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1622610, cid 3, qid 0 00:27:37.541 [2024-05-15 01:56:01.218720] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1622770, cid 4, qid 0 00:27:37.541 [2024-05-15 01:56:01.218904] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.541 [2024-05-15 01:56:01.218917] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.541 [2024-05-15 01:56:01.218924] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.541 [2024-05-15 01:56:01.218931] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x1622770) on tqpair=0x15c9120 00:27:37.541 [2024-05-15 01:56:01.218947] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:27:37.541 [2024-05-15 01:56:01.218957] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:27:37.541 [2024-05-15 01:56:01.218975] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.541 [2024-05-15 01:56:01.218985] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15c9120) 00:27:37.541 [2024-05-15 01:56:01.218996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.541 [2024-05-15 01:56:01.219017] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1622770, cid 4, qid 0 00:27:37.541 [2024-05-15 01:56:01.219150] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:37.541 [2024-05-15 01:56:01.219165] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:37.541 [2024-05-15 01:56:01.219173] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:37.541 [2024-05-15 01:56:01.219179] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15c9120): datao=0, datal=4096, cccid=4 00:27:37.541 [2024-05-15 01:56:01.219187] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1622770) on tqpair(0x15c9120): expected_datao=0, payload_size=4096 00:27:37.541 [2024-05-15 01:56:01.219195] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.541 [2024-05-15 01:56:01.219212] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:37.541 [2024-05-15 01:56:01.219228] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:37.541 [2024-05-15 01:56:01.260377] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.541 [2024-05-15 01:56:01.260395] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.541 [2024-05-15 01:56:01.260403] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.541 [2024-05-15 01:56:01.260411] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1622770) on tqpair=0x15c9120 00:27:37.541 [2024-05-15 01:56:01.260433] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:27:37.541 [2024-05-15 01:56:01.260484] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.541 [2024-05-15 01:56:01.260495] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15c9120) 00:27:37.541 [2024-05-15 01:56:01.260507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.541 [2024-05-15 01:56:01.260520] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.541 [2024-05-15 01:56:01.260527] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.541 [2024-05-15 01:56:01.260534] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15c9120) 00:27:37.541 [2024-05-15 01:56:01.260543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 
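
In the GET LOG PAGE (02) commands above and below, the low byte of cdw10 selects the log page (0x70, the discovery log) and the upper 16 bits are NUMDL, the zero-based dword count: cdw10 00ff0070 requests 256 dwords (the 1024-byte log header, matching the datal=1024 in the C2H data records that follow), 02ff0070 fetches the 3072 bytes of entries, and the trailing 00010070 reads back just 8 bytes, which appears to be a re-read of the generation counter to confirm the log did not change between reads. A minimal sketch of the header read using SPDK's public API; 'ctrlr' is assumed to be an already-connected discovery controller, and error handling plus the callback body are elided:

    #include "spdk/nvme.h"
    #include "spdk/nvmf_spec.h"

    static void log_page_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
        /* on success the 1024-byte header is valid; numrec entries follow */
    }

    /* Read the discovery log page header (log page ID 0x70) at offset 0. */
    static int read_discovery_header(struct spdk_nvme_ctrlr *ctrlr,
                                     struct spdk_nvmf_discovery_log_page *hdr)
    {
        return spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
                                                0 /* nsid, as in the trace */,
                                                hdr, sizeof(*hdr),
                                                0 /* offset */,
                                                log_page_done, NULL);
    }
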
00:27:37.541 [2024-05-15 01:56:01.260576] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1622770, cid 4, qid 0 00:27:37.541 [2024-05-15 01:56:01.260588] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16228d0, cid 5, qid 0 00:27:37.541 [2024-05-15 01:56:01.260731] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:37.541 [2024-05-15 01:56:01.260746] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:37.541 [2024-05-15 01:56:01.260753] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:37.541 [2024-05-15 01:56:01.260760] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15c9120): datao=0, datal=1024, cccid=4 00:27:37.541 [2024-05-15 01:56:01.260768] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1622770) on tqpair(0x15c9120): expected_datao=0, payload_size=1024 00:27:37.541 [2024-05-15 01:56:01.260775] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.541 [2024-05-15 01:56:01.260786] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:37.541 [2024-05-15 01:56:01.260794] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:37.541 [2024-05-15 01:56:01.260803] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.541 [2024-05-15 01:56:01.260812] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.541 [2024-05-15 01:56:01.260819] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.541 [2024-05-15 01:56:01.260826] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16228d0) on tqpair=0x15c9120 00:27:37.541 [2024-05-15 01:56:01.301334] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.541 [2024-05-15 01:56:01.301353] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.541 [2024-05-15 01:56:01.301361] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.541 [2024-05-15 01:56:01.301368] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1622770) on tqpair=0x15c9120 00:27:37.541 [2024-05-15 01:56:01.301396] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.541 [2024-05-15 01:56:01.301407] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15c9120) 00:27:37.541 [2024-05-15 01:56:01.301418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.541 [2024-05-15 01:56:01.301448] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1622770, cid 4, qid 0 00:27:37.541 [2024-05-15 01:56:01.301566] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:37.541 [2024-05-15 01:56:01.301582] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:37.541 [2024-05-15 01:56:01.301589] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:37.541 [2024-05-15 01:56:01.301596] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15c9120): datao=0, datal=3072, cccid=4 00:27:37.541 [2024-05-15 01:56:01.301604] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1622770) on tqpair(0x15c9120): expected_datao=0, payload_size=3072 00:27:37.541 [2024-05-15 01:56:01.301612] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.541 [2024-05-15 01:56:01.301623] 
nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:37.541 [2024-05-15 01:56:01.301631] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:37.541 [2024-05-15 01:56:01.301664] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.541 [2024-05-15 01:56:01.301678] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.541 [2024-05-15 01:56:01.301685] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.541 [2024-05-15 01:56:01.301693] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1622770) on tqpair=0x15c9120 00:27:37.541 [2024-05-15 01:56:01.301709] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.541 [2024-05-15 01:56:01.301717] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15c9120) 00:27:37.541 [2024-05-15 01:56:01.301728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.541 [2024-05-15 01:56:01.301761] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1622770, cid 4, qid 0 00:27:37.541 [2024-05-15 01:56:01.301870] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:37.541 [2024-05-15 01:56:01.301883] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:37.541 [2024-05-15 01:56:01.301890] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:37.541 [2024-05-15 01:56:01.301897] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15c9120): datao=0, datal=8, cccid=4 00:27:37.541 [2024-05-15 01:56:01.301905] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1622770) on tqpair(0x15c9120): expected_datao=0, payload_size=8 00:27:37.541 [2024-05-15 01:56:01.301913] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.541 [2024-05-15 01:56:01.301923] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:37.541 [2024-05-15 01:56:01.301931] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:37.541 [2024-05-15 01:56:01.345233] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.541 [2024-05-15 01:56:01.345252] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.541 [2024-05-15 01:56:01.345274] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.541 [2024-05-15 01:56:01.345282] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1622770) on tqpair=0x15c9120
00:27:37.541 =====================================================
00:27:37.541 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:27:37.541 =====================================================
00:27:37.541 Controller Capabilities/Features
00:27:37.541 ================================
00:27:37.542 Vendor ID: 0000
00:27:37.542 Subsystem Vendor ID: 0000
00:27:37.542 Serial Number: ....................
00:27:37.542 Model Number: ........................................
00:27:37.542 Firmware Version: 24.05
00:27:37.542 Recommended Arb Burst: 0
00:27:37.542 IEEE OUI Identifier: 00 00 00
00:27:37.542 Multi-path I/O
00:27:37.542 May have multiple subsystem ports: No
00:27:37.542 May have multiple controllers: No
00:27:37.542 Associated with SR-IOV VF: No
00:27:37.542 Max Data Transfer Size: 131072
00:27:37.542 Max Number of Namespaces: 0
00:27:37.542 Max Number of I/O Queues: 1024
00:27:37.542 NVMe Specification Version (VS): 1.3
00:27:37.542 NVMe Specification Version (Identify): 1.3
00:27:37.542 Maximum Queue Entries: 128
00:27:37.542 Contiguous Queues Required: Yes
00:27:37.542 Arbitration Mechanisms Supported
00:27:37.542 Weighted Round Robin: Not Supported
00:27:37.542 Vendor Specific: Not Supported
00:27:37.542 Reset Timeout: 15000 ms
00:27:37.542 Doorbell Stride: 4 bytes
00:27:37.542 NVM Subsystem Reset: Not Supported
00:27:37.542 Command Sets Supported
00:27:37.542 NVM Command Set: Supported
00:27:37.542 Boot Partition: Not Supported
00:27:37.542 Memory Page Size Minimum: 4096 bytes
00:27:37.542 Memory Page Size Maximum: 4096 bytes
00:27:37.542 Persistent Memory Region: Not Supported
00:27:37.542 Optional Asynchronous Events Supported
00:27:37.542 Namespace Attribute Notices: Not Supported
00:27:37.542 Firmware Activation Notices: Not Supported
00:27:37.542 ANA Change Notices: Not Supported
00:27:37.542 PLE Aggregate Log Change Notices: Not Supported
00:27:37.542 LBA Status Info Alert Notices: Not Supported
00:27:37.542 EGE Aggregate Log Change Notices: Not Supported
00:27:37.542 Normal NVM Subsystem Shutdown event: Not Supported
00:27:37.542 Zone Descriptor Change Notices: Not Supported
00:27:37.542 Discovery Log Change Notices: Supported
00:27:37.542 Controller Attributes
00:27:37.542 128-bit Host Identifier: Not Supported
00:27:37.542 Non-Operational Permissive Mode: Not Supported
00:27:37.542 NVM Sets: Not Supported
00:27:37.542 Read Recovery Levels: Not Supported
00:27:37.542 Endurance Groups: Not Supported
00:27:37.542 Predictable Latency Mode: Not Supported
00:27:37.542 Traffic Based Keep ALive: Not Supported
00:27:37.542 Namespace Granularity: Not Supported
00:27:37.542 SQ Associations: Not Supported
00:27:37.542 UUID List: Not Supported
00:27:37.542 Multi-Domain Subsystem: Not Supported
00:27:37.542 Fixed Capacity Management: Not Supported
00:27:37.542 Variable Capacity Management: Not Supported
00:27:37.542 Delete Endurance Group: Not Supported
00:27:37.542 Delete NVM Set: Not Supported
00:27:37.542 Extended LBA Formats Supported: Not Supported
00:27:37.542 Flexible Data Placement Supported: Not Supported
00:27:37.542
00:27:37.542 Controller Memory Buffer Support
00:27:37.542 ================================
00:27:37.542 Supported: No
00:27:37.542
00:27:37.542 Persistent Memory Region Support
00:27:37.542 ================================
00:27:37.542 Supported: No
00:27:37.542
00:27:37.542 Admin Command Set Attributes
00:27:37.542 ============================
00:27:37.542 Security Send/Receive: Not Supported
00:27:37.542 Format NVM: Not Supported
00:27:37.542 Firmware Activate/Download: Not Supported
00:27:37.542 Namespace Management: Not Supported
00:27:37.542 Device Self-Test: Not Supported
00:27:37.542 Directives: Not Supported
00:27:37.542 NVMe-MI: Not Supported
00:27:37.542 Virtualization Management: Not Supported
00:27:37.542 Doorbell Buffer Config: Not Supported
00:27:37.542 Get LBA Status Capability: Not Supported
00:27:37.542 Command & Feature Lockdown Capability: Not Supported
00:27:37.542 Abort Command Limit: 1
00:27:37.542 Async Event Request Limit: 4
00:27:37.542 Number of Firmware Slots: N/A
00:27:37.542 Firmware Slot 1 Read-Only: N/A
00:27:37.542 Firmware Activation Without Reset: N/A
00:27:37.542 Multiple Update Detection Support: N/A
00:27:37.542 Firmware Update Granularity: No Information Provided
00:27:37.542 Per-Namespace SMART Log: No
00:27:37.542 Asymmetric Namespace Access Log Page: Not Supported
00:27:37.542 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:27:37.542 Command Effects Log Page: Not Supported
00:27:37.542 Get Log Page Extended Data: Supported
00:27:37.542 Telemetry Log Pages: Not Supported
00:27:37.542 Persistent Event Log Pages: Not Supported
00:27:37.542 Supported Log Pages Log Page: May Support
00:27:37.542 Commands Supported & Effects Log Page: Not Supported
00:27:37.542 Feature Identifiers & Effects Log Page:May Support
00:27:37.542 NVMe-MI Commands & Effects Log Page: May Support
00:27:37.542 Data Area 4 for Telemetry Log: Not Supported
00:27:37.542 Error Log Page Entries Supported: 128
00:27:37.542 Keep Alive: Not Supported
00:27:37.542
00:27:37.542 NVM Command Set Attributes
00:27:37.542 ==========================
00:27:37.542 Submission Queue Entry Size
00:27:37.542 Max: 1
00:27:37.542 Min: 1
00:27:37.542 Completion Queue Entry Size
00:27:37.542 Max: 1
00:27:37.542 Min: 1
00:27:37.542 Number of Namespaces: 0
00:27:37.542 Compare Command: Not Supported
00:27:37.542 Write Uncorrectable Command: Not Supported
00:27:37.542 Dataset Management Command: Not Supported
00:27:37.542 Write Zeroes Command: Not Supported
00:27:37.542 Set Features Save Field: Not Supported
00:27:37.542 Reservations: Not Supported
00:27:37.542 Timestamp: Not Supported
00:27:37.542 Copy: Not Supported
00:27:37.542 Volatile Write Cache: Not Present
00:27:37.542 Atomic Write Unit (Normal): 1
00:27:37.542 Atomic Write Unit (PFail): 1
00:27:37.542 Atomic Compare & Write Unit: 1
00:27:37.542 Fused Compare & Write: Supported
00:27:37.542 Scatter-Gather List
00:27:37.542 SGL Command Set: Supported
00:27:37.542 SGL Keyed: Supported
00:27:37.542 SGL Bit Bucket Descriptor: Not Supported
00:27:37.542 SGL Metadata Pointer: Not Supported
00:27:37.542 Oversized SGL: Not Supported
00:27:37.542 SGL Metadata Address: Not Supported
00:27:37.542 SGL Offset: Supported
00:27:37.542 Transport SGL Data Block: Not Supported
00:27:37.542 Replay Protected Memory Block: Not Supported
00:27:37.542
00:27:37.542 Firmware Slot Information
00:27:37.542 =========================
00:27:37.542 Active slot: 0
00:27:37.542
00:27:37.542
00:27:37.542 Error Log
00:27:37.542 =========
00:27:37.542
00:27:37.542 Active Namespaces
00:27:37.542 =================
00:27:37.542 Discovery Log Page
00:27:37.542 ==================
00:27:37.542 Generation Counter: 2
00:27:37.542 Number of Records: 2
00:27:37.542 Record Format: 0
00:27:37.542
00:27:37.542 Discovery Log Entry 0
00:27:37.542 ----------------------
00:27:37.542 Transport Type: 3 (TCP)
00:27:37.542 Address Family: 1 (IPv4)
00:27:37.542 Subsystem Type: 3 (Current Discovery Subsystem)
00:27:37.542 Entry Flags:
00:27:37.542 Duplicate Returned Information: 1
00:27:37.542 Explicit Persistent Connection Support for Discovery: 1
00:27:37.542 Transport Requirements:
00:27:37.542 Secure Channel: Not Required
00:27:37.542 Port ID: 0 (0x0000)
00:27:37.542 Controller ID: 65535 (0xffff)
00:27:37.542 Admin Max SQ Size: 128
00:27:37.542 Transport Service Identifier: 4420
00:27:37.542 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:27:37.542 Transport Address: 10.0.0.2
00:27:37.542 Discovery Log Entry 1
00:27:37.542 ----------------------
00:27:37.542 Transport Type: 3 (TCP)
00:27:37.542 Address Family: 1 (IPv4)
00:27:37.542 Subsystem Type: 2 (NVM Subsystem)
00:27:37.542 Entry Flags:
00:27:37.542 Duplicate Returned Information: 0
00:27:37.542 Explicit Persistent Connection Support for Discovery: 0
00:27:37.542 Transport Requirements:
00:27:37.542 Secure Channel: Not Required
00:27:37.542 Port ID: 0 (0x0000)
00:27:37.542 Controller ID: 65535 (0xffff)
00:27:37.542 Admin Max SQ Size: 128
00:27:37.542 Transport Service Identifier: 4420
00:27:37.542 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:27:37.542 Transport Address: 10.0.0.2 [2024-05-15 01:56:01.345392] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:27:37.542 [2024-05-15 01:56:01.345419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.542 [2024-05-15 01:56:01.345432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.542 [2024-05-15 01:56:01.345442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.542 [2024-05-15 01:56:01.345451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.542 [2024-05-15 01:56:01.345466] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.542 [2024-05-15 01:56:01.345474] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.542 [2024-05-15 01:56:01.345481] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15c9120) 00:27:37.542 [2024-05-15 01:56:01.345492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.542 [2024-05-15 01:56:01.345522] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1622610, cid 3, qid 0 00:27:37.542 [2024-05-15 01:56:01.345608] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.542 [2024-05-15 01:56:01.345620] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.542 [2024-05-15 01:56:01.345628] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.542 [2024-05-15 01:56:01.345635] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1622610) on tqpair=0x15c9120 00:27:37.542 [2024-05-15 01:56:01.345649] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.542 [2024-05-15 01:56:01.345656] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.542 [2024-05-15 01:56:01.345663] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15c9120) 00:27:37.542 [2024-05-15 01:56:01.345674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.542 [2024-05-15 01:56:01.345699] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1622610, cid 3, qid 0 00:27:37.542 [2024-05-15 01:56:01.345798] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.542 [2024-05-15 01:56:01.345811] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.542 [2024-05-15 01:56:01.345818]
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.542 [2024-05-15 01:56:01.345829] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1622610) on tqpair=0x15c9120 00:27:37.542 [2024-05-15 01:56:01.345840] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:27:37.542 [2024-05-15 01:56:01.345849] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:27:37.542 [2024-05-15 01:56:01.345865] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.542 [2024-05-15 01:56:01.345874] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.542 [2024-05-15 01:56:01.345880] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15c9120) 00:27:37.542 [2024-05-15 01:56:01.345891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.542 [2024-05-15 01:56:01.345912] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1622610, cid 3, qid 0 00:27:37.542 [2024-05-15 01:56:01.346018] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.542 [2024-05-15 01:56:01.346033] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.542 [2024-05-15 01:56:01.346041] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.542 [2024-05-15 01:56:01.346048] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1622610) on tqpair=0x15c9120 00:27:37.542 [2024-05-15 01:56:01.346066] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.542 [2024-05-15 01:56:01.346076] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.542 [2024-05-15 01:56:01.346083] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15c9120) 00:27:37.542 [2024-05-15 01:56:01.346093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.542 [2024-05-15 01:56:01.346114] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1622610, cid 3, qid 0 00:27:37.542 [2024-05-15 01:56:01.346200] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.542 [2024-05-15 01:56:01.346222] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.542 [2024-05-15 01:56:01.346231] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.542 [2024-05-15 01:56:01.346238] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1622610) on tqpair=0x15c9120 00:27:37.542 [2024-05-15 01:56:01.346256] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.542 [2024-05-15 01:56:01.346266] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.542 [2024-05-15 01:56:01.346272] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15c9120) 00:27:37.542 [2024-05-15 01:56:01.346283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.542 [2024-05-15 01:56:01.346304] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1622610, cid 3, qid 0 00:27:37.542 [2024-05-15 01:56:01.346391] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.543 [2024-05-15 
01:56:01.346406] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.543 [2024-05-15 01:56:01.346413] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.543 [2024-05-15 01:56:01.346420] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1622610) on tqpair=0x15c9120 00:27:37.543 [2024-05-15 01:56:01.346438] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.543 [2024-05-15 01:56:01.346447] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.543 [2024-05-15 01:56:01.346454] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15c9120) 00:27:37.543 [2024-05-15 01:56:01.346464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.543 [2024-05-15 01:56:01.346485] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1622610, cid 3, qid 0 00:27:37.543 [2024-05-15 01:56:01.346575] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.543 [2024-05-15 01:56:01.346591] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.543 [2024-05-15 01:56:01.346599] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.543 [2024-05-15 01:56:01.346606] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1622610) on tqpair=0x15c9120 00:27:37.543 [2024-05-15 01:56:01.346624] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.543 [2024-05-15 01:56:01.346633] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.543 [2024-05-15 01:56:01.346639] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15c9120) 00:27:37.543 [2024-05-15 01:56:01.346650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.543 [2024-05-15 01:56:01.346671] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1622610, cid 3, qid 0 00:27:37.543 [2024-05-15 01:56:01.346759] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.543 [2024-05-15 01:56:01.346774] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.543 [2024-05-15 01:56:01.346781] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.543 [2024-05-15 01:56:01.346788] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1622610) on tqpair=0x15c9120 00:27:37.543 [2024-05-15 01:56:01.346805] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.543 [2024-05-15 01:56:01.346815] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.543 [2024-05-15 01:56:01.346822] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15c9120) 00:27:37.543 [2024-05-15 01:56:01.346832] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.543 [2024-05-15 01:56:01.346853] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1622610, cid 3, qid 0 00:27:37.543 [2024-05-15 01:56:01.346939] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.543 [2024-05-15 01:56:01.346953] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.543 [2024-05-15 01:56:01.346960] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
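
The long run of near-identical FABRIC PROPERTY GET exchanges around this point is the host polling controller status during the shutdown that began with "shutdown timeout = 10000 ms" above; it ends once CSTS.SHST reads shutdown-complete, reported below as "shutdown complete in 7 milliseconds". A minimal sketch of the check being repeated, with prop_get() a hypothetical stand-in for the FABRIC PROPERTY GET command; the offset and bit positions are the NVMe-spec values:

    #include <stdbool.h>
    #include <stdint.h>

    #define REG_CSTS           0x1c /* Controller Status register offset */
    #define CSTS_SHST_MASK     0xc  /* Shutdown Status, CSTS bits 3:2 */
    #define CSTS_SHST_COMPLETE 0x8  /* 10b: shutdown processing complete */

    uint32_t prop_get(uint32_t offset); /* hypothetical FABRIC PROPERTY GET */

    static bool shutdown_done(void)
    {
        return (prop_get(REG_CSTS) & CSTS_SHST_MASK) == CSTS_SHST_COMPLETE;
    }
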
00:27:37.543 [2024-05-15 01:56:01.346967] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1622610) on tqpair=0x15c9120 00:27:37.543 [2024-05-15 01:56:01.346985] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.543 [2024-05-15 01:56:01.346994] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.543 [2024-05-15 01:56:01.347001] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15c9120) 00:27:37.543 [2024-05-15 01:56:01.347011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.543 [2024-05-15 01:56:01.347032] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1622610, cid 3, qid 0 00:27:37.543 [2024-05-15 01:56:01.347116] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.543 [2024-05-15 01:56:01.347128] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.543 [2024-05-15 01:56:01.347136] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.543 [2024-05-15 01:56:01.347143] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1622610) on tqpair=0x15c9120 00:27:37.543 [2024-05-15 01:56:01.347160] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.543 [2024-05-15 01:56:01.347169] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.543 [2024-05-15 01:56:01.347175] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15c9120) 00:27:37.543 [2024-05-15 01:56:01.347186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.543 [2024-05-15 01:56:01.347223] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1622610, cid 3, qid 0 00:27:37.543 [2024-05-15 01:56:01.347301] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.543 [2024-05-15 01:56:01.347314] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.543 [2024-05-15 01:56:01.347325] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.543 [2024-05-15 01:56:01.347333] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1622610) on tqpair=0x15c9120 00:27:37.543 [2024-05-15 01:56:01.347350] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.543 [2024-05-15 01:56:01.347360] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.543 [2024-05-15 01:56:01.347366] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15c9120) 00:27:37.543 [2024-05-15 01:56:01.347377] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.543 [2024-05-15 01:56:01.347398] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1622610, cid 3, qid 0 00:27:37.543 [2024-05-15 01:56:01.347481] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.543 [2024-05-15 01:56:01.347493] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.543 [2024-05-15 01:56:01.347500] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.543 [2024-05-15 01:56:01.347507] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1622610) on tqpair=0x15c9120 00:27:37.543 [2024-05-15 01:56:01.347524] 
nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.543 [2024-05-15 01:56:01.347534] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.543 [2024-05-15 01:56:01.347540] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15c9120) 00:27:37.543 [2024-05-15 01:56:01.347551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.543 [2024-05-15 01:56:01.347571] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1622610, cid 3, qid 0 00:27:37.543 [2024-05-15 01:56:01.347654] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.543 [2024-05-15 01:56:01.347666] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.543 [2024-05-15 01:56:01.347674] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.543 [2024-05-15 01:56:01.347681] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1622610) on tqpair=0x15c9120 00:27:37.543 [2024-05-15 01:56:01.347698] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.543 [2024-05-15 01:56:01.347707] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.543 [2024-05-15 01:56:01.347713] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15c9120) 00:27:37.543 [2024-05-15 01:56:01.347724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.543 [2024-05-15 01:56:01.347744] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1622610, cid 3, qid 0 00:27:37.543 [2024-05-15 01:56:01.347822] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.543 [2024-05-15 01:56:01.347834] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.543 [2024-05-15 01:56:01.347841] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.543 [2024-05-15 01:56:01.347848] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1622610) on tqpair=0x15c9120 00:27:37.543 [2024-05-15 01:56:01.347865] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.543 [2024-05-15 01:56:01.347874] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.543 [2024-05-15 01:56:01.347881] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15c9120) 00:27:37.543 [2024-05-15 01:56:01.347891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.543 [2024-05-15 01:56:01.347911] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1622610, cid 3, qid 0 00:27:37.543 [2024-05-15 01:56:01.347997] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.543 [2024-05-15 01:56:01.348012] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.543 [2024-05-15 01:56:01.348023] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.543 [2024-05-15 01:56:01.348030] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1622610) on tqpair=0x15c9120 00:27:37.543 [2024-05-15 01:56:01.348049] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.543 [2024-05-15 01:56:01.348058] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.543 [2024-05-15 
01:56:01.348064] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15c9120) 00:27:37.543 [2024-05-15 01:56:01.348075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.543 [2024-05-15 01:56:01.348096] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1622610, cid 3, qid 0 00:27:37.543 [2024-05-15 01:56:01.348177] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.543 [2024-05-15 01:56:01.348189] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.543 [2024-05-15 01:56:01.348196] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.543 [2024-05-15 01:56:01.348203] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1622610) on tqpair=0x15c9120 00:27:37.543 [2024-05-15 01:56:01.348228] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.543 [2024-05-15 01:56:01.348239] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.543 [2024-05-15 01:56:01.348245] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15c9120) 00:27:37.543 [2024-05-15 01:56:01.348256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.543 [2024-05-15 01:56:01.348276] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1622610, cid 3, qid 0 00:27:37.543 [2024-05-15 01:56:01.348361] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.543 [2024-05-15 01:56:01.348373] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.543 [2024-05-15 01:56:01.348380] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.543 [2024-05-15 01:56:01.348387] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1622610) on tqpair=0x15c9120 00:27:37.543 [2024-05-15 01:56:01.348405] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.543 [2024-05-15 01:56:01.348413] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.543 [2024-05-15 01:56:01.348420] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15c9120) 00:27:37.543 [2024-05-15 01:56:01.348431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.543 [2024-05-15 01:56:01.348451] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1622610, cid 3, qid 0 00:27:37.543 [2024-05-15 01:56:01.348556] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.543 [2024-05-15 01:56:01.348571] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.543 [2024-05-15 01:56:01.348578] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.543 [2024-05-15 01:56:01.348585] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1622610) on tqpair=0x15c9120 00:27:37.543 [2024-05-15 01:56:01.348603] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.543 [2024-05-15 01:56:01.348612] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.543 [2024-05-15 01:56:01.348618] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15c9120) 00:27:37.543 [2024-05-15 01:56:01.348629] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.543 [2024-05-15 01:56:01.348650] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1622610, cid 3, qid 0 00:27:37.543 [2024-05-15 01:56:01.348732] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.543 [2024-05-15 01:56:01.348746] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.543 [2024-05-15 01:56:01.348753] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.543 [2024-05-15 01:56:01.348764] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1622610) on tqpair=0x15c9120 00:27:37.543 [2024-05-15 01:56:01.348783] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.543 [2024-05-15 01:56:01.348792] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.543 [2024-05-15 01:56:01.348799] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15c9120) 00:27:37.543 [2024-05-15 01:56:01.348810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.543 [2024-05-15 01:56:01.348831] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1622610, cid 3, qid 0 00:27:37.543 [2024-05-15 01:56:01.348919] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.543 [2024-05-15 01:56:01.348934] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.543 [2024-05-15 01:56:01.348941] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.543 [2024-05-15 01:56:01.348948] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1622610) on tqpair=0x15c9120 00:27:37.543 [2024-05-15 01:56:01.348966] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.543 [2024-05-15 01:56:01.348975] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.544 [2024-05-15 01:56:01.348982] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15c9120) 00:27:37.544 [2024-05-15 01:56:01.348993] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.544 [2024-05-15 01:56:01.349013] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1622610, cid 3, qid 0 00:27:37.544 [2024-05-15 01:56:01.349098] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.544 [2024-05-15 01:56:01.349113] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.544 [2024-05-15 01:56:01.349120] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.544 [2024-05-15 01:56:01.349127] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1622610) on tqpair=0x15c9120 00:27:37.544 [2024-05-15 01:56:01.349144] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.544 [2024-05-15 01:56:01.349153] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.544 [2024-05-15 01:56:01.349160] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15c9120) 00:27:37.544 [2024-05-15 01:56:01.349171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.544 [2024-05-15 01:56:01.349191] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1622610, cid 3, qid 0 00:27:37.544 [2024-05-15 01:56:01.353226] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.544 [2024-05-15 01:56:01.353243] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.544 [2024-05-15 01:56:01.353250] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.544 [2024-05-15 01:56:01.353257] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1622610) on tqpair=0x15c9120 00:27:37.544 [2024-05-15 01:56:01.353291] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.544 [2024-05-15 01:56:01.353302] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.544 [2024-05-15 01:56:01.353308] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15c9120) 00:27:37.544 [2024-05-15 01:56:01.353319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.544 [2024-05-15 01:56:01.353341] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1622610, cid 3, qid 0 00:27:37.544 [2024-05-15 01:56:01.353463] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.544 [2024-05-15 01:56:01.353475] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.544 [2024-05-15 01:56:01.353483] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.544 [2024-05-15 01:56:01.353490] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1622610) on tqpair=0x15c9120 00:27:37.544 [2024-05-15 01:56:01.353511] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:27:37.544 00:27:37.544 01:56:01 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:27:37.544 [2024-05-15 01:56:01.383433] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
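
The -r argument echoed just above is an SPDK transport ID string; spdk_nvme_identify parses it and connects to the target before dumping controller data. The same connection can be made programmatically with public SPDK APIs. A minimal sketch, assuming the SPDK environment has already been initialized; probe options and error reporting are elided:

    #include "spdk/nvme.h"

    static struct spdk_nvme_ctrlr *connect_cnode1(void)
    {
        struct spdk_nvme_transport_id trid = {0};

        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 "
                "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return NULL;
        }
        /* NULL opts selects the defaults; returns NULL if connect fails */
        return spdk_nvme_connect(&trid, NULL, 0);
    }
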
00:27:37.544 [2024-05-15 01:56:01.383476] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4149949 ] 00:27:37.544 EAL: No free 2048 kB hugepages reported on node 1 00:27:37.544 [2024-05-15 01:56:01.415021] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:27:37.544 [2024-05-15 01:56:01.415061] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:37.544 [2024-05-15 01:56:01.415070] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:37.544 [2024-05-15 01:56:01.415082] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:37.544 [2024-05-15 01:56:01.415092] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:37.544 [2024-05-15 01:56:01.418270] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:27:37.544 [2024-05-15 01:56:01.418306] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1142120 0 00:27:37.544 [2024-05-15 01:56:01.425237] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:37.544 [2024-05-15 01:56:01.425262] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:37.544 [2024-05-15 01:56:01.425271] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:37.544 [2024-05-15 01:56:01.425277] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:37.544 [2024-05-15 01:56:01.425311] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.544 [2024-05-15 01:56:01.425321] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.544 [2024-05-15 01:56:01.425328] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1142120) 00:27:37.544 [2024-05-15 01:56:01.425341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:37.544 [2024-05-15 01:56:01.425368] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119b1f0, cid 0, qid 0 00:27:37.544 [2024-05-15 01:56:01.432227] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.544 [2024-05-15 01:56:01.432245] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.544 [2024-05-15 01:56:01.432252] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.544 [2024-05-15 01:56:01.432259] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x119b1f0) on tqpair=0x1142120 00:27:37.544 [2024-05-15 01:56:01.432274] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:37.544 [2024-05-15 01:56:01.432284] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:27:37.544 [2024-05-15 01:56:01.432293] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:27:37.544 [2024-05-15 01:56:01.432312] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.544 [2024-05-15 01:56:01.432321] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.544 [2024-05-15 
01:56:01.432327] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1142120) 00:27:37.544 [2024-05-15 01:56:01.432342] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.544 [2024-05-15 01:56:01.432366] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119b1f0, cid 0, qid 0 00:27:37.544 [2024-05-15 01:56:01.432490] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.544 [2024-05-15 01:56:01.432502] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.544 [2024-05-15 01:56:01.432509] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.544 [2024-05-15 01:56:01.432516] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x119b1f0) on tqpair=0x1142120 00:27:37.544 [2024-05-15 01:56:01.432526] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:27:37.544 [2024-05-15 01:56:01.432539] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:27:37.544 [2024-05-15 01:56:01.432551] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.544 [2024-05-15 01:56:01.432558] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.544 [2024-05-15 01:56:01.432565] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1142120) 00:27:37.544 [2024-05-15 01:56:01.432576] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.544 [2024-05-15 01:56:01.432597] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119b1f0, cid 0, qid 0 00:27:37.544 [2024-05-15 01:56:01.432680] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.544 [2024-05-15 01:56:01.432692] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.544 [2024-05-15 01:56:01.432699] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.544 [2024-05-15 01:56:01.432706] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x119b1f0) on tqpair=0x1142120 00:27:37.544 [2024-05-15 01:56:01.432716] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:27:37.544 [2024-05-15 01:56:01.432729] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:27:37.544 [2024-05-15 01:56:01.432741] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.544 [2024-05-15 01:56:01.432749] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.544 [2024-05-15 01:56:01.432755] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1142120) 00:27:37.544 [2024-05-15 01:56:01.432766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.544 [2024-05-15 01:56:01.432787] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119b1f0, cid 0, qid 0 00:27:37.544 [2024-05-15 01:56:01.432876] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.544 [2024-05-15 01:56:01.432891] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
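
The "read vs", "read cap", and "check en" states above, followed by the disable/enable transitions in the next records, are the standard controller initialization handshake; on fabrics every register access is one of the FABRIC PROPERTY GET/SET commands seen in the trace. A compressed sketch of the sequence, with prop_get()/prop_set() as hypothetical stand-ins and only the EN/RDY bits handled (the per-state timeouts logged above are omitted):

    #include <stdint.h>

    #define REG_CC   0x14 /* Controller Configuration register offset */
    #define REG_CSTS 0x1c /* Controller Status register offset */

    uint32_t prop_get(uint32_t offset);             /* hypothetical */
    void     prop_set(uint32_t offset, uint32_t v); /* hypothetical */

    static void enable_controller(void)
    {
        /* "disable and wait for CSTS.RDY = 0" */
        prop_set(REG_CC, prop_get(REG_CC) & ~1u);
        while (prop_get(REG_CSTS) & 1u)
            ;
        /* "Setting CC.EN = 1", then "wait for CSTS.RDY = 1" */
        prop_set(REG_CC, prop_get(REG_CC) | 1u);
        while (!(prop_get(REG_CSTS) & 1u))
            ;
        /* ready: identify, AERs, and keep-alive setup follow */
    }
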
00:27:37.544 [2024-05-15 01:56:01.432897] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.544 [2024-05-15 01:56:01.432904] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x119b1f0) on tqpair=0x1142120 00:27:37.544 [2024-05-15 01:56:01.432914] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:37.544 [2024-05-15 01:56:01.432931] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.544 [2024-05-15 01:56:01.432940] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.544 [2024-05-15 01:56:01.432947] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1142120) 00:27:37.544 [2024-05-15 01:56:01.432957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.544 [2024-05-15 01:56:01.432979] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119b1f0, cid 0, qid 0 00:27:37.544 [2024-05-15 01:56:01.433063] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.544 [2024-05-15 01:56:01.433078] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.544 [2024-05-15 01:56:01.433085] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.544 [2024-05-15 01:56:01.433092] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x119b1f0) on tqpair=0x1142120 00:27:37.544 [2024-05-15 01:56:01.433101] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:27:37.544 [2024-05-15 01:56:01.433110] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:27:37.544 [2024-05-15 01:56:01.433123] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:37.544 [2024-05-15 01:56:01.433233] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:27:37.544 [2024-05-15 01:56:01.433242] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:37.544 [2024-05-15 01:56:01.433254] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.544 [2024-05-15 01:56:01.433261] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.544 [2024-05-15 01:56:01.433268] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1142120) 00:27:37.544 [2024-05-15 01:56:01.433279] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.544 [2024-05-15 01:56:01.433300] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119b1f0, cid 0, qid 0 00:27:37.544 [2024-05-15 01:56:01.433390] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.544 [2024-05-15 01:56:01.433405] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.544 [2024-05-15 01:56:01.433412] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.544 [2024-05-15 01:56:01.433419] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x119b1f0) on 
tqpair=0x1142120 00:27:37.544 [2024-05-15 01:56:01.433428] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:37.544 [2024-05-15 01:56:01.433445] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.544 [2024-05-15 01:56:01.433454] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.544 [2024-05-15 01:56:01.433461] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1142120) 00:27:37.544 [2024-05-15 01:56:01.433471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.544 [2024-05-15 01:56:01.433492] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119b1f0, cid 0, qid 0 00:27:37.544 [2024-05-15 01:56:01.433581] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.544 [2024-05-15 01:56:01.433594] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.544 [2024-05-15 01:56:01.433600] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.544 [2024-05-15 01:56:01.433607] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x119b1f0) on tqpair=0x1142120 00:27:37.544 [2024-05-15 01:56:01.433616] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:37.544 [2024-05-15 01:56:01.433624] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:27:37.544 [2024-05-15 01:56:01.433637] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:27:37.544 [2024-05-15 01:56:01.433651] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:27:37.544 [2024-05-15 01:56:01.433669] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.544 [2024-05-15 01:56:01.433677] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1142120) 00:27:37.544 [2024-05-15 01:56:01.433688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.544 [2024-05-15 01:56:01.433710] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119b1f0, cid 0, qid 0 00:27:37.544 [2024-05-15 01:56:01.433858] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:37.544 [2024-05-15 01:56:01.433873] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:37.544 [2024-05-15 01:56:01.433880] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:37.544 [2024-05-15 01:56:01.433887] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1142120): datao=0, datal=4096, cccid=0 00:27:37.544 [2024-05-15 01:56:01.433895] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x119b1f0) on tqpair(0x1142120): expected_datao=0, payload_size=4096 00:27:37.544 [2024-05-15 01:56:01.433902] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.544 [2024-05-15 01:56:01.433913] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:37.544 [2024-05-15 01:56:01.433921] 
nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:37.805 [2024-05-15 01:56:01.478226] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.805 [2024-05-15 01:56:01.478245] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.805 [2024-05-15 01:56:01.478253] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.805 [2024-05-15 01:56:01.478263] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x119b1f0) on tqpair=0x1142120 00:27:37.805 [2024-05-15 01:56:01.478277] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:27:37.805 [2024-05-15 01:56:01.478286] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:27:37.805 [2024-05-15 01:56:01.478293] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:27:37.805 [2024-05-15 01:56:01.478300] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:27:37.805 [2024-05-15 01:56:01.478311] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:27:37.805 [2024-05-15 01:56:01.478320] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:27:37.805 [2024-05-15 01:56:01.478341] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:27:37.805 [2024-05-15 01:56:01.478357] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.805 [2024-05-15 01:56:01.478366] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.805 [2024-05-15 01:56:01.478373] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1142120) 00:27:37.805 [2024-05-15 01:56:01.478384] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:37.805 [2024-05-15 01:56:01.478408] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119b1f0, cid 0, qid 0 00:27:37.805 [2024-05-15 01:56:01.478501] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.805 [2024-05-15 01:56:01.478516] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.805 [2024-05-15 01:56:01.478523] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.805 [2024-05-15 01:56:01.478530] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x119b1f0) on tqpair=0x1142120 00:27:37.805 [2024-05-15 01:56:01.478547] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.805 [2024-05-15 01:56:01.478556] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.805 [2024-05-15 01:56:01.478562] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1142120) 00:27:37.805 [2024-05-15 01:56:01.478576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.805 [2024-05-15 01:56:01.478588] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.805 [2024-05-15 01:56:01.478595] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.805 [2024-05-15 01:56:01.478602] 
nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1142120) 00:27:37.805 [2024-05-15 01:56:01.478611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.805 [2024-05-15 01:56:01.478621] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.805 [2024-05-15 01:56:01.478628] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.805 [2024-05-15 01:56:01.478635] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1142120) 00:27:37.805 [2024-05-15 01:56:01.478644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.805 [2024-05-15 01:56:01.478654] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.805 [2024-05-15 01:56:01.478661] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.805 [2024-05-15 01:56:01.478668] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1142120) 00:27:37.805 [2024-05-15 01:56:01.478677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.805 [2024-05-15 01:56:01.478686] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:27:37.805 [2024-05-15 01:56:01.478700] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:37.805 [2024-05-15 01:56:01.478712] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.805 [2024-05-15 01:56:01.478719] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1142120) 00:27:37.805 [2024-05-15 01:56:01.478730] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.805 [2024-05-15 01:56:01.478753] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119b1f0, cid 0, qid 0 00:27:37.805 [2024-05-15 01:56:01.478764] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119b350, cid 1, qid 0 00:27:37.806 [2024-05-15 01:56:01.478772] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119b4b0, cid 2, qid 0 00:27:37.806 [2024-05-15 01:56:01.478780] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119b610, cid 3, qid 0 00:27:37.806 [2024-05-15 01:56:01.478788] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119b770, cid 4, qid 0 00:27:37.806 [2024-05-15 01:56:01.478899] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.806 [2024-05-15 01:56:01.478915] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.806 [2024-05-15 01:56:01.478922] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.806 [2024-05-15 01:56:01.478928] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x119b770) on tqpair=0x1142120 00:27:37.806 [2024-05-15 01:56:01.478942] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:27:37.806 [2024-05-15 01:56:01.478952] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to identify controller iocs specific (timeout 30000 ms) 00:27:37.806 [2024-05-15 01:56:01.478967] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:27:37.806 [2024-05-15 01:56:01.478978] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:27:37.806 [2024-05-15 01:56:01.478990] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.806 [2024-05-15 01:56:01.479001] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.806 [2024-05-15 01:56:01.479008] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1142120) 00:27:37.806 [2024-05-15 01:56:01.479018] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:37.806 [2024-05-15 01:56:01.479040] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119b770, cid 4, qid 0 00:27:37.806 [2024-05-15 01:56:01.479181] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.806 [2024-05-15 01:56:01.479196] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.806 [2024-05-15 01:56:01.479211] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.806 [2024-05-15 01:56:01.479226] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x119b770) on tqpair=0x1142120 00:27:37.806 [2024-05-15 01:56:01.479285] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:27:37.806 [2024-05-15 01:56:01.479305] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:27:37.806 [2024-05-15 01:56:01.479320] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.806 [2024-05-15 01:56:01.479327] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1142120) 00:27:37.806 [2024-05-15 01:56:01.479338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.806 [2024-05-15 01:56:01.479360] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119b770, cid 4, qid 0 00:27:37.806 [2024-05-15 01:56:01.479468] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:37.806 [2024-05-15 01:56:01.479483] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:37.806 [2024-05-15 01:56:01.479490] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:37.806 [2024-05-15 01:56:01.479497] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1142120): datao=0, datal=4096, cccid=4 00:27:37.806 [2024-05-15 01:56:01.479505] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x119b770) on tqpair(0x1142120): expected_datao=0, payload_size=4096 00:27:37.806 [2024-05-15 01:56:01.479512] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.806 [2024-05-15 01:56:01.479523] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:37.806 [2024-05-15 01:56:01.479530] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:37.806 [2024-05-15 01:56:01.479563] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.806 [2024-05-15 01:56:01.479577] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.806 [2024-05-15 01:56:01.479584] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.806 [2024-05-15 01:56:01.479590] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x119b770) on tqpair=0x1142120 00:27:37.806 [2024-05-15 01:56:01.479612] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:27:37.806 [2024-05-15 01:56:01.479633] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:27:37.806 [2024-05-15 01:56:01.479650] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:27:37.806 [2024-05-15 01:56:01.479664] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.806 [2024-05-15 01:56:01.479671] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1142120) 00:27:37.806 [2024-05-15 01:56:01.479682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.806 [2024-05-15 01:56:01.479703] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119b770, cid 4, qid 0 00:27:37.806 [2024-05-15 01:56:01.479812] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:37.806 [2024-05-15 01:56:01.479828] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:37.806 [2024-05-15 01:56:01.479835] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:37.806 [2024-05-15 01:56:01.479841] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1142120): datao=0, datal=4096, cccid=4 00:27:37.806 [2024-05-15 01:56:01.479849] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x119b770) on tqpair(0x1142120): expected_datao=0, payload_size=4096 00:27:37.806 [2024-05-15 01:56:01.479856] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.806 [2024-05-15 01:56:01.479867] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:37.806 [2024-05-15 01:56:01.479874] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:37.806 [2024-05-15 01:56:01.479911] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.806 [2024-05-15 01:56:01.479923] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.806 [2024-05-15 01:56:01.479930] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.806 [2024-05-15 01:56:01.479937] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x119b770) on tqpair=0x1142120 00:27:37.806 [2024-05-15 01:56:01.479955] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:27:37.806 [2024-05-15 01:56:01.479972] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:27:37.806 [2024-05-15 01:56:01.479986] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.806 [2024-05-15 01:56:01.479993] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x1142120) 00:27:37.806 [2024-05-15 01:56:01.480004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.806 [2024-05-15 01:56:01.480025] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119b770, cid 4, qid 0 00:27:37.806 [2024-05-15 01:56:01.480133] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:37.806 [2024-05-15 01:56:01.480148] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:37.806 [2024-05-15 01:56:01.480155] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:37.806 [2024-05-15 01:56:01.480161] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1142120): datao=0, datal=4096, cccid=4 00:27:37.806 [2024-05-15 01:56:01.480169] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x119b770) on tqpair(0x1142120): expected_datao=0, payload_size=4096 00:27:37.806 [2024-05-15 01:56:01.480176] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.807 [2024-05-15 01:56:01.480186] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:37.807 [2024-05-15 01:56:01.480194] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:37.807 [2024-05-15 01:56:01.480236] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.807 [2024-05-15 01:56:01.480250] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.807 [2024-05-15 01:56:01.480257] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.807 [2024-05-15 01:56:01.480264] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x119b770) on tqpair=0x1142120 00:27:37.807 [2024-05-15 01:56:01.480286] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:27:37.807 [2024-05-15 01:56:01.480302] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:27:37.807 [2024-05-15 01:56:01.480317] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:27:37.807 [2024-05-15 01:56:01.480328] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:27:37.807 [2024-05-15 01:56:01.480339] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:27:37.807 [2024-05-15 01:56:01.480349] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:27:37.807 [2024-05-15 01:56:01.480357] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:27:37.807 [2024-05-15 01:56:01.480366] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:27:37.807 [2024-05-15 01:56:01.480387] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.807 [2024-05-15 01:56:01.480397] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1142120) 00:27:37.807 [2024-05-15 01:56:01.480408] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.807 [2024-05-15 01:56:01.480419] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.807 [2024-05-15 01:56:01.480426] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.807 [2024-05-15 01:56:01.480448] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1142120) 00:27:37.807 [2024-05-15 01:56:01.480457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.807 [2024-05-15 01:56:01.480482] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119b770, cid 4, qid 0 00:27:37.807 [2024-05-15 01:56:01.480507] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119b8d0, cid 5, qid 0 00:27:37.807 [2024-05-15 01:56:01.480645] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.807 [2024-05-15 01:56:01.480657] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.807 [2024-05-15 01:56:01.480664] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.807 [2024-05-15 01:56:01.480671] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x119b770) on tqpair=0x1142120 00:27:37.807 [2024-05-15 01:56:01.480683] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.807 [2024-05-15 01:56:01.480692] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.807 [2024-05-15 01:56:01.480698] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.807 [2024-05-15 01:56:01.480705] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x119b8d0) on tqpair=0x1142120 00:27:37.807 [2024-05-15 01:56:01.480722] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.807 [2024-05-15 01:56:01.480730] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1142120) 00:27:37.807 [2024-05-15 01:56:01.480741] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.807 [2024-05-15 01:56:01.480762] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119b8d0, cid 5, qid 0 00:27:37.807 [2024-05-15 01:56:01.480905] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.807 [2024-05-15 01:56:01.480920] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.807 [2024-05-15 01:56:01.480927] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.807 [2024-05-15 01:56:01.480933] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x119b8d0) on tqpair=0x1142120 00:27:37.807 [2024-05-15 01:56:01.480951] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.807 [2024-05-15 01:56:01.480959] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1142120) 00:27:37.807 [2024-05-15 01:56:01.480970] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.807 [2024-05-15 01:56:01.480991] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119b8d0, cid 5, qid 0 00:27:37.807 [2024-05-15 01:56:01.481081] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.807 [2024-05-15 01:56:01.481100] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.807 [2024-05-15 01:56:01.481107] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.807 [2024-05-15 01:56:01.481114] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x119b8d0) on tqpair=0x1142120 00:27:37.807 [2024-05-15 01:56:01.481132] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.807 [2024-05-15 01:56:01.481141] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1142120) 00:27:37.807 [2024-05-15 01:56:01.481151] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.807 [2024-05-15 01:56:01.481172] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119b8d0, cid 5, qid 0 00:27:37.807 [2024-05-15 01:56:01.481311] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.807 [2024-05-15 01:56:01.481325] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.807 [2024-05-15 01:56:01.481332] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.807 [2024-05-15 01:56:01.481338] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x119b8d0) on tqpair=0x1142120 00:27:37.807 [2024-05-15 01:56:01.481358] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.807 [2024-05-15 01:56:01.481367] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1142120) 00:27:37.807 [2024-05-15 01:56:01.481378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.807 [2024-05-15 01:56:01.481390] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.807 [2024-05-15 01:56:01.481397] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1142120) 00:27:37.807 [2024-05-15 01:56:01.481406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.807 [2024-05-15 01:56:01.481418] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.807 [2024-05-15 01:56:01.481425] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1142120) 00:27:37.807 [2024-05-15 01:56:01.481435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.807 [2024-05-15 01:56:01.481451] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.807 [2024-05-15 01:56:01.481459] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1142120) 00:27:37.808 [2024-05-15 01:56:01.481469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.808 [2024-05-15 01:56:01.481491] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119b8d0, cid 5, qid 0 00:27:37.808 [2024-05-15 01:56:01.481502] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119b770, cid 4, qid 0 00:27:37.808 [2024-05-15 01:56:01.481510] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x119ba30, cid 6, qid 0 00:27:37.808 [2024-05-15 01:56:01.481518] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119bb90, cid 7, qid 0 00:27:37.808 [2024-05-15 01:56:01.481711] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:37.808 [2024-05-15 01:56:01.481724] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:37.808 [2024-05-15 01:56:01.481731] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:37.808 [2024-05-15 01:56:01.481737] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1142120): datao=0, datal=8192, cccid=5 00:27:37.808 [2024-05-15 01:56:01.481745] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x119b8d0) on tqpair(0x1142120): expected_datao=0, payload_size=8192 00:27:37.808 [2024-05-15 01:56:01.481753] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.808 [2024-05-15 01:56:01.481777] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:37.808 [2024-05-15 01:56:01.481787] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:37.808 [2024-05-15 01:56:01.481799] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:37.808 [2024-05-15 01:56:01.481809] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:37.808 [2024-05-15 01:56:01.481816] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:37.808 [2024-05-15 01:56:01.481822] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1142120): datao=0, datal=512, cccid=4 00:27:37.808 [2024-05-15 01:56:01.481830] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x119b770) on tqpair(0x1142120): expected_datao=0, payload_size=512 00:27:37.808 [2024-05-15 01:56:01.481837] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.808 [2024-05-15 01:56:01.481847] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:37.808 [2024-05-15 01:56:01.481854] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:37.808 [2024-05-15 01:56:01.481862] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:37.808 [2024-05-15 01:56:01.481871] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:37.808 [2024-05-15 01:56:01.481878] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:37.808 [2024-05-15 01:56:01.481884] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1142120): datao=0, datal=512, cccid=6 00:27:37.808 [2024-05-15 01:56:01.481892] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x119ba30) on tqpair(0x1142120): expected_datao=0, payload_size=512 00:27:37.808 [2024-05-15 01:56:01.481899] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.808 [2024-05-15 01:56:01.481908] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:37.808 [2024-05-15 01:56:01.481915] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:37.808 [2024-05-15 01:56:01.481924] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:37.808 [2024-05-15 01:56:01.481933] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:37.808 [2024-05-15 01:56:01.481939] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:37.808 [2024-05-15 01:56:01.481945] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1142120): datao=0, datal=4096, cccid=7 
00:27:37.808 [2024-05-15 01:56:01.481953] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x119bb90) on tqpair(0x1142120): expected_datao=0, payload_size=4096 00:27:37.808 [2024-05-15 01:56:01.481961] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.808 [2024-05-15 01:56:01.481970] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:37.808 [2024-05-15 01:56:01.481977] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:37.808 [2024-05-15 01:56:01.481989] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.808 [2024-05-15 01:56:01.481999] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.808 [2024-05-15 01:56:01.482005] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.808 [2024-05-15 01:56:01.482012] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x119b8d0) on tqpair=0x1142120 00:27:37.808 [2024-05-15 01:56:01.482032] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.808 [2024-05-15 01:56:01.482043] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.808 [2024-05-15 01:56:01.482049] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.808 [2024-05-15 01:56:01.482056] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x119b770) on tqpair=0x1142120 00:27:37.808 [2024-05-15 01:56:01.482071] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.808 [2024-05-15 01:56:01.482081] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.808 [2024-05-15 01:56:01.482088] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.808 [2024-05-15 01:56:01.482095] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x119ba30) on tqpair=0x1142120 00:27:37.808 [2024-05-15 01:56:01.482109] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.808 [2024-05-15 01:56:01.482122] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.808 [2024-05-15 01:56:01.482129] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.808 [2024-05-15 01:56:01.482136] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x119bb90) on tqpair=0x1142120 00:27:37.808 ===================================================== 00:27:37.808 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:37.808 ===================================================== 00:27:37.808 Controller Capabilities/Features 00:27:37.808 ================================ 00:27:37.808 Vendor ID: 8086 00:27:37.808 Subsystem Vendor ID: 8086 00:27:37.808 Serial Number: SPDK00000000000001 00:27:37.808 Model Number: SPDK bdev Controller 00:27:37.808 Firmware Version: 24.05 00:27:37.808 Recommended Arb Burst: 6 00:27:37.808 IEEE OUI Identifier: e4 d2 5c 00:27:37.808 Multi-path I/O 00:27:37.808 May have multiple subsystem ports: Yes 00:27:37.808 May have multiple controllers: Yes 00:27:37.808 Associated with SR-IOV VF: No 00:27:37.808 Max Data Transfer Size: 131072 00:27:37.808 Max Number of Namespaces: 32 00:27:37.808 Max Number of I/O Queues: 127 00:27:37.808 NVMe Specification Version (VS): 1.3 00:27:37.808 NVMe Specification Version (Identify): 1.3 00:27:37.808 Maximum Queue Entries: 128 00:27:37.808 Contiguous Queues Required: Yes 00:27:37.808 Arbitration Mechanisms Supported 00:27:37.808 Weighted Round Robin: Not Supported 00:27:37.808 Vendor 
Specific: Not Supported 00:27:37.808 Reset Timeout: 15000 ms 00:27:37.808 Doorbell Stride: 4 bytes 00:27:37.808 NVM Subsystem Reset: Not Supported 00:27:37.808 Command Sets Supported 00:27:37.808 NVM Command Set: Supported 00:27:37.808 Boot Partition: Not Supported 00:27:37.809 Memory Page Size Minimum: 4096 bytes 00:27:37.809 Memory Page Size Maximum: 4096 bytes 00:27:37.809 Persistent Memory Region: Not Supported 00:27:37.809 Optional Asynchronous Events Supported 00:27:37.809 Namespace Attribute Notices: Supported 00:27:37.809 Firmware Activation Notices: Not Supported 00:27:37.809 ANA Change Notices: Not Supported 00:27:37.809 PLE Aggregate Log Change Notices: Not Supported 00:27:37.809 LBA Status Info Alert Notices: Not Supported 00:27:37.809 EGE Aggregate Log Change Notices: Not Supported 00:27:37.809 Normal NVM Subsystem Shutdown event: Not Supported 00:27:37.809 Zone Descriptor Change Notices: Not Supported 00:27:37.809 Discovery Log Change Notices: Not Supported 00:27:37.809 Controller Attributes 00:27:37.809 128-bit Host Identifier: Supported 00:27:37.809 Non-Operational Permissive Mode: Not Supported 00:27:37.809 NVM Sets: Not Supported 00:27:37.809 Read Recovery Levels: Not Supported 00:27:37.809 Endurance Groups: Not Supported 00:27:37.809 Predictable Latency Mode: Not Supported 00:27:37.809 Traffic Based Keep Alive: Not Supported 00:27:37.809 Namespace Granularity: Not Supported 00:27:37.809 SQ Associations: Not Supported 00:27:37.809 UUID List: Not Supported 00:27:37.809 Multi-Domain Subsystem: Not Supported 00:27:37.809 Fixed Capacity Management: Not Supported 00:27:37.809 Variable Capacity Management: Not Supported 00:27:37.809 Delete Endurance Group: Not Supported 00:27:37.809 Delete NVM Set: Not Supported 00:27:37.809 Extended LBA Formats Supported: Not Supported 00:27:37.809 Flexible Data Placement Supported: Not Supported 00:27:37.809 00:27:37.809 Controller Memory Buffer Support 00:27:37.809 ================================ 00:27:37.809 Supported: No 00:27:37.809 00:27:37.809 Persistent Memory Region Support 00:27:37.809 ================================ 00:27:37.809 Supported: No 00:27:37.809 00:27:37.809 Admin Command Set Attributes 00:27:37.809 ============================ 00:27:37.809 Security Send/Receive: Not Supported 00:27:37.809 Format NVM: Not Supported 00:27:37.809 Firmware Activate/Download: Not Supported 00:27:37.809 Namespace Management: Not Supported 00:27:37.809 Device Self-Test: Not Supported 00:27:37.809 Directives: Not Supported 00:27:37.809 NVMe-MI: Not Supported 00:27:37.809 Virtualization Management: Not Supported 00:27:37.809 Doorbell Buffer Config: Not Supported 00:27:37.809 Get LBA Status Capability: Not Supported 00:27:37.809 Command & Feature Lockdown Capability: Not Supported 00:27:37.809 Abort Command Limit: 4 00:27:37.809 Async Event Request Limit: 4 00:27:37.809 Number of Firmware Slots: N/A 00:27:37.809 Firmware Slot 1 Read-Only: N/A 00:27:37.809 Firmware Activation Without Reset: N/A 00:27:37.809 Multiple Update Detection Support: N/A 00:27:37.809 Firmware Update Granularity: No Information Provided 00:27:37.809 Per-Namespace SMART Log: No 00:27:37.809 Asymmetric Namespace Access Log Page: Not Supported 00:27:37.809 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:27:37.809 Command Effects Log Page: Supported 00:27:37.809 Get Log Page Extended Data: Supported 00:27:37.809 Telemetry Log Pages: Not Supported 00:27:37.809 Persistent Event Log Pages: Not Supported 00:27:37.809 Supported Log Pages Log Page: May Support 00:27:37.809 Commands 
Supported & Effects Log Page: Not Supported 00:27:37.809 Feature Identifiers & Effects Log Page: May Support 00:27:37.809 NVMe-MI Commands & Effects Log Page: May Support 00:27:37.809 Data Area 4 for Telemetry Log: Not Supported 00:27:37.809 Error Log Page Entries Supported: 128 00:27:37.809 Keep Alive: Supported 00:27:37.809 Keep Alive Granularity: 10000 ms 00:27:37.809 00:27:37.809 NVM Command Set Attributes 00:27:37.809 ========================== 00:27:37.809 Submission Queue Entry Size 00:27:37.809 Max: 64 00:27:37.809 Min: 64 00:27:37.809 Completion Queue Entry Size 00:27:37.809 Max: 16 00:27:37.809 Min: 16 00:27:37.809 Number of Namespaces: 32 00:27:37.809 Compare Command: Supported 00:27:37.809 Write Uncorrectable Command: Not Supported 00:27:37.809 Dataset Management Command: Supported 00:27:37.809 Write Zeroes Command: Supported 00:27:37.809 Set Features Save Field: Not Supported 00:27:37.809 Reservations: Supported 00:27:37.809 Timestamp: Not Supported 00:27:37.809 Copy: Supported 00:27:37.809 Volatile Write Cache: Present 00:27:37.809 Atomic Write Unit (Normal): 1 00:27:37.809 Atomic Write Unit (PFail): 1 00:27:37.809 Atomic Compare & Write Unit: 1 00:27:37.809 Fused Compare & Write: Supported 00:27:37.809 Scatter-Gather List 00:27:37.809 SGL Command Set: Supported 00:27:37.809 SGL Keyed: Supported 00:27:37.809 SGL Bit Bucket Descriptor: Not Supported 00:27:37.809 SGL Metadata Pointer: Not Supported 00:27:37.809 Oversized SGL: Not Supported 00:27:37.809 SGL Metadata Address: Not Supported 00:27:37.809 SGL Offset: Supported 00:27:37.809 Transport SGL Data Block: Not Supported 00:27:37.809 Replay Protected Memory Block: Not Supported 00:27:37.809 00:27:37.809 Firmware Slot Information 00:27:37.809 ========================= 00:27:37.809 Active slot: 1 00:27:37.809 Slot 1 Firmware Revision: 24.05 00:27:37.809 00:27:37.809 00:27:37.809 Commands Supported and Effects 00:27:37.809 ============================== 00:27:37.809 Admin Commands 00:27:37.809 -------------- 00:27:37.809 Get Log Page (02h): Supported 00:27:37.809 Identify (06h): Supported 00:27:37.809 Abort (08h): Supported 00:27:37.809 Set Features (09h): Supported 00:27:37.810 Get Features (0Ah): Supported 00:27:37.810 Asynchronous Event Request (0Ch): Supported 00:27:37.810 Keep Alive (18h): Supported 00:27:37.810 I/O Commands 00:27:37.810 ------------ 00:27:37.810 Flush (00h): Supported LBA-Change 00:27:37.810 Write (01h): Supported LBA-Change 00:27:37.810 Read (02h): Supported 00:27:37.810 Compare (05h): Supported 00:27:37.810 Write Zeroes (08h): Supported LBA-Change 00:27:37.810 Dataset Management (09h): Supported LBA-Change 00:27:37.810 Copy (19h): Supported LBA-Change 00:27:37.810 Unknown (79h): Supported LBA-Change 00:27:37.810 Unknown (7Ah): Supported 00:27:37.810 00:27:37.810 Error Log 00:27:37.810 ========= 00:27:37.810 00:27:37.810 Arbitration 00:27:37.810 =========== 00:27:37.810 Arbitration Burst: 1 00:27:37.810 00:27:37.810 Power Management 00:27:37.810 ================ 00:27:37.810 Number of Power States: 1 00:27:37.810 Current Power State: Power State #0 00:27:37.810 Power State #0: 00:27:37.810 Max Power: 0.00 W 00:27:37.810 Non-Operational State: Operational 00:27:37.810 Entry Latency: Not Reported 00:27:37.810 Exit Latency: Not Reported 00:27:37.810 Relative Read Throughput: 0 00:27:37.810 Relative Read Latency: 0 00:27:37.810 Relative Write Throughput: 0 00:27:37.810 Relative Write Latency: 0 00:27:37.810 Idle Power: Not Reported 00:27:37.810 Active Power: Not Reported 00:27:37.810 Non-Operational 
Permissive Mode: Not Supported 00:27:37.810 00:27:37.810 Health Information 00:27:37.810 ================== 00:27:37.810 Critical Warnings: 00:27:37.810 Available Spare Space: OK 00:27:37.810 Temperature: OK 00:27:37.810 Device Reliability: OK 00:27:37.810 Read Only: No 00:27:37.810 Volatile Memory Backup: OK 00:27:37.810 Current Temperature: 0 Kelvin (-273 Celsius) 00:27:37.810 Temperature Threshold: [2024-05-15 01:56:01.486276] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.810 [2024-05-15 01:56:01.486288] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1142120) 00:27:37.810 [2024-05-15 01:56:01.486299] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.810 [2024-05-15 01:56:01.486323] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119bb90, cid 7, qid 0 00:27:37.810 [2024-05-15 01:56:01.486485] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.810 [2024-05-15 01:56:01.486498] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.810 [2024-05-15 01:56:01.486505] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.810 [2024-05-15 01:56:01.486512] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x119bb90) on tqpair=0x1142120 00:27:37.810 [2024-05-15 01:56:01.486551] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:27:37.810 [2024-05-15 01:56:01.486572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.810 [2024-05-15 01:56:01.486584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.810 [2024-05-15 01:56:01.486594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.810 [2024-05-15 01:56:01.486604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.810 [2024-05-15 01:56:01.486617] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.810 [2024-05-15 01:56:01.486625] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.810 [2024-05-15 01:56:01.486631] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1142120) 00:27:37.810 [2024-05-15 01:56:01.486642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.810 [2024-05-15 01:56:01.486665] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119b610, cid 3, qid 0 00:27:37.810 [2024-05-15 01:56:01.486808] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.810 [2024-05-15 01:56:01.486823] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.810 [2024-05-15 01:56:01.486831] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.810 [2024-05-15 01:56:01.486838] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x119b610) on tqpair=0x1142120 00:27:37.810 [2024-05-15 01:56:01.486850] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.810 [2024-05-15 01:56:01.486858] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.810 [2024-05-15 01:56:01.486864] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1142120) 00:27:37.810 [2024-05-15 01:56:01.486875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.810 [2024-05-15 01:56:01.486901] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119b610, cid 3, qid 0 00:27:37.810 [2024-05-15 01:56:01.487012] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.810 [2024-05-15 01:56:01.487024] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.810 [2024-05-15 01:56:01.487031] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.810 [2024-05-15 01:56:01.487038] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x119b610) on tqpair=0x1142120 00:27:37.810 [2024-05-15 01:56:01.487047] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:27:37.810 [2024-05-15 01:56:01.487059] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:27:37.810 [2024-05-15 01:56:01.487075] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.810 [2024-05-15 01:56:01.487084] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.810 [2024-05-15 01:56:01.487091] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1142120) 00:27:37.810 [2024-05-15 01:56:01.487101] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.810 [2024-05-15 01:56:01.487122] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119b610, cid 3, qid 0 00:27:37.810 [2024-05-15 01:56:01.487208] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.810 [2024-05-15 01:56:01.487229] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.810 [2024-05-15 01:56:01.487237] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.810 [2024-05-15 01:56:01.487244] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x119b610) on tqpair=0x1142120 00:27:37.810 [2024-05-15 01:56:01.487262] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.810 [2024-05-15 01:56:01.487271] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.810 [2024-05-15 01:56:01.487278] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1142120) 00:27:37.810 [2024-05-15 01:56:01.487288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.811 [2024-05-15 01:56:01.487309] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119b610, cid 3, qid 0 00:27:37.811 [2024-05-15 01:56:01.487415] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.811 [2024-05-15 01:56:01.487430] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.811 [2024-05-15 01:56:01.487437] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.811 [2024-05-15 01:56:01.487444] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x119b610) on tqpair=0x1142120 00:27:37.811 [2024-05-15 01:56:01.487462] 
nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:37.811 [2024-05-15 01:56:01.487471] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:37.811 [2024-05-15 01:56:01.487478] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1142120) 00:27:37.811 [2024-05-15 01:56:01.487489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.811 [2024-05-15 01:56:01.487509] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x119b610, cid 3, qid 0 00:27:37.811 [2024-05-15 01:56:01.487611] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.811 [2024-05-15 01:56:01.487623] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.811 [2024-05-15 01:56:01.487630] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.811 [2024-05-15 01:56:01.487637] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x119b610) on tqpair=0x1142120 00:27:37.811
[... this identical FABRIC PROPERTY GET capsule_cmd/response *DEBUG* cycle for cid 3 on tqpair 0x1142120 repeats many times with advancing timestamps while the controller shutdown is polled; duplicate iterations trimmed, resuming at the final response ...]
[2024-05-15 01:56:01.494436] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:37.812 [2024-05-15 01:56:01.494449] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:37.812 [2024-05-15 01:56:01.494456]
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:37.813 [2024-05-15 01:56:01.494462] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x119b610) on tqpair=0x1142120 00:27:37.813 [2024-05-15 01:56:01.494477] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:27:37.813 0 Kelvin (-273 Celsius) 00:27:37.813 Available Spare: 0% 00:27:37.813 Available Spare Threshold: 0% 00:27:37.813 Life Percentage Used: 0% 00:27:37.813 Data Units Read: 0 00:27:37.813 Data Units Written: 0 00:27:37.813 Host Read Commands: 0 00:27:37.813 Host Write Commands: 0 00:27:37.813 Controller Busy Time: 0 minutes 00:27:37.813 Power Cycles: 0 00:27:37.813 Power On Hours: 0 hours 00:27:37.813 Unsafe Shutdowns: 0 00:27:37.813 Unrecoverable Media Errors: 0 00:27:37.813 Lifetime Error Log Entries: 0 00:27:37.813 Warning Temperature Time: 0 minutes 00:27:37.813 Critical Temperature Time: 0 minutes 00:27:37.813 00:27:37.813 Number of Queues 00:27:37.813 ================ 00:27:37.813 Number of I/O Submission Queues: 127 00:27:37.813 Number of I/O Completion Queues: 127 00:27:37.813 00:27:37.813 Active Namespaces 00:27:37.813 ================= 00:27:37.813 Namespace ID:1 00:27:37.813 Error Recovery Timeout: Unlimited 00:27:37.813 Command Set Identifier: NVM (00h) 00:27:37.813 Deallocate: Supported 00:27:37.813 Deallocated/Unwritten Error: Not Supported 00:27:37.813 Deallocated Read Value: Unknown 00:27:37.813 Deallocate in Write Zeroes: Not Supported 00:27:37.813 Deallocated Guard Field: 0xFFFF 00:27:37.813 Flush: Supported 00:27:37.813 Reservation: Supported 00:27:37.813 Namespace Sharing Capabilities: Multiple Controllers 00:27:37.813 Size (in LBAs): 131072 (0GiB) 00:27:37.813 Capacity (in LBAs): 131072 (0GiB) 00:27:37.813 Utilization (in LBAs): 131072 (0GiB) 00:27:37.813 NGUID: ABCDEF0123456789ABCDEF0123456789 00:27:37.813 EUI64: ABCDEF0123456789 00:27:37.813 UUID: 23fd0972-947e-4c08-a22d-01998b16038a 00:27:37.813 Thin Provisioning: Not Supported 00:27:37.813 Per-NS Atomic Units: Yes 00:27:37.813 Atomic Boundary Size (Normal): 0 00:27:37.813 Atomic Boundary Size (PFail): 0 00:27:37.813 Atomic Boundary Offset: 0 00:27:37.813 Maximum Single Source Range Length: 65535 00:27:37.813 Maximum Copy Length: 65535 00:27:37.813 Maximum Source Range Count: 1 00:27:37.813 NGUID/EUI64 Never Reused: No 00:27:37.813 Namespace Write Protected: No 00:27:37.813 Number of LBA Formats: 1 00:27:37.813 Current LBA Format: LBA Format #00 00:27:37.813 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:37.813 00:27:37.813 01:56:01 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:27:37.813 01:56:01 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:37.813 01:56:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.813 01:56:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:37.813 01:56:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.813 01:56:01 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:27:37.813 01:56:01 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:27:37.813 01:56:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:37.813 01:56:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:27:37.813 01:56:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:37.813 
01:56:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:27:37.813 01:56:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:37.813 01:56:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:37.813 rmmod nvme_tcp 00:27:37.813 rmmod nvme_fabrics 00:27:37.813 rmmod nvme_keyring 00:27:37.813 01:56:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:37.813 01:56:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:27:37.813 01:56:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:27:37.813 01:56:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 4149801 ']' 00:27:37.813 01:56:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 4149801 00:27:37.813 01:56:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@947 -- # '[' -z 4149801 ']' 00:27:37.813 01:56:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # kill -0 4149801 00:27:37.813 01:56:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # uname 00:27:37.813 01:56:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:27:37.813 01:56:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4149801 00:27:37.813 01:56:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:27:37.813 01:56:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:27:37.813 01:56:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4149801' 00:27:37.813 killing process with pid 4149801 00:27:37.813 01:56:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # kill 4149801 00:27:37.813 [2024-05-15 01:56:01.578797] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:37.813 01:56:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@971 -- # wait 4149801 00:27:38.071 01:56:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:38.071 01:56:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:38.071 01:56:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:38.071 01:56:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:38.071 01:56:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:38.071 01:56:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:38.071 01:56:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:38.071 01:56:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:39.971 01:56:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:39.971 00:27:39.971 real 0m5.815s 00:27:39.971 user 0m4.395s 00:27:39.971 sys 0m2.178s 00:27:39.971 01:56:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # xtrace_disable 00:27:39.971 01:56:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:39.971 ************************************ 00:27:39.971 END TEST nvmf_identify 00:27:39.971 ************************************ 00:27:39.971 01:56:03 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:39.971 01:56:03 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:27:39.971 01:56:03 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:27:39.971 01:56:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:40.229 ************************************ 00:27:40.229 START TEST nvmf_perf 00:27:40.229 ************************************ 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:40.229 * Looking for test storage... 00:27:40.229 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... same toolchain prefixes repeated several times, then the standard system dirs; the near-identical PATH re-exports from paths/export.sh@3-@4 and the PATH echo at paths/export.sh@6 trimmed ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:56:03 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 01:56:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- #
nvmftestinit 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:27:40.229 01:56:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:42.755 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:42.755 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:42.755 Found net devices under 0000:09:00.0: cvl_0_0 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:42.755 01:56:06 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:42.755 Found net devices under 0000:09:00.1: cvl_0_1 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:42.755 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:42.756 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:42.756 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:42.756 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:27:42.756 00:27:42.756 --- 10.0.0.2 ping statistics --- 00:27:42.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:42.756 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:27:42.756 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:42.756 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:42.756 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:27:42.756 00:27:42.756 --- 10.0.0.1 ping statistics --- 00:27:42.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:42.756 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:27:42.756 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:42.756 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:27:42.756 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:42.756 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:42.756 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:42.756 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:42.756 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:42.756 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:42.756 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:42.756 01:56:06 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:27:42.756 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:42.756 01:56:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@721 -- # xtrace_disable 00:27:42.756 01:56:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:42.756 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=4152231 00:27:42.756 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:42.756 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 4152231 00:27:42.756 01:56:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@828 -- # '[' -z 4152231 ']' 00:27:42.756 01:56:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:42.756 01:56:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local max_retries=100 00:27:42.756 01:56:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:42.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:42.756 01:56:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:42.756 01:56:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:42.756 [2024-05-15 01:56:06.520345] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
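For reference, the nvmfappstart/waitforlisten sequence above amounts to launching nvmf_tgt inside the test namespace and polling its RPC socket before any configuration is sent. A minimal sketch, assuming the default /var/tmp/spdk.sock socket path and the cvl_0_0_ns_spdk namespace used in this run:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Block until the target answers RPCs; rpc_get_methods is a core SPDK RPC.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done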
00:27:42.756 [2024-05-15 01:56:06.520420] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:42.756 EAL: No free 2048 kB hugepages reported on node 1 00:27:42.756 [2024-05-15 01:56:06.597098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:42.756 [2024-05-15 01:56:06.681503] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:42.756 [2024-05-15 01:56:06.681579] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:42.756 [2024-05-15 01:56:06.681592] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:42.756 [2024-05-15 01:56:06.681603] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:42.756 [2024-05-15 01:56:06.681613] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:42.756 [2024-05-15 01:56:06.681696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:42.756 [2024-05-15 01:56:06.681762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:42.756 [2024-05-15 01:56:06.681828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:42.756 [2024-05-15 01:56:06.681830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:43.013 01:56:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:43.013 01:56:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@861 -- # return 0 00:27:43.013 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:43.013 01:56:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@727 -- # xtrace_disable 00:27:43.013 01:56:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:43.013 01:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:43.013 01:56:06 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:43.013 01:56:06 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:27:46.290 01:56:09 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:27:46.290 01:56:09 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:27:46.290 01:56:10 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:0b:00.0 00:27:46.290 01:56:10 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:46.548 01:56:10 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:27:46.548 01:56:10 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:0b:00.0 ']' 00:27:46.548 01:56:10 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:27:46.548 01:56:10 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:27:46.548 01:56:10 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:27:46.805 [2024-05-15 01:56:10.671180] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
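Stripped of the xtrace noise, the target-side bring-up traced here (and continued just below) reduces to the following rpc.py sequence; the flags are exactly those captured in this log, with -o coming from NVMF_TRANSPORT_OPTS for TCP in nvmf/common.sh:

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py bdev_malloc_create 64 512        # creates Malloc0 (64 MiB, 512-byte blocks)
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420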
00:27:46.805 01:56:10 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:47.062 01:56:10 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:47.062 01:56:10 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:47.319 01:56:11 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:47.319 01:56:11 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:47.576 01:56:11 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:47.833 [2024-05-15 01:56:11.698653] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:47.833 [2024-05-15 01:56:11.698948] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:47.833 01:56:11 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:48.091 01:56:11 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:0b:00.0 ']' 00:27:48.091 01:56:11 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:27:48.091 01:56:11 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:27:48.091 01:56:11 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:27:49.459 Initializing NVMe Controllers 00:27:49.459 Attached to NVMe Controller at 0000:0b:00.0 [8086:0a54] 00:27:49.459 Associating PCIE (0000:0b:00.0) NSID 1 with lcore 0 00:27:49.459 Initialization complete. Launching workers. 00:27:49.459 ======================================================== 00:27:49.459 Latency(us) 00:27:49.459 Device Information : IOPS MiB/s Average min max 00:27:49.459 PCIE (0000:0b:00.0) NSID 1 from core 0: 85291.78 333.17 374.57 11.07 4518.44 00:27:49.459 ======================================================== 00:27:49.459 Total : 85291.78 333.17 374.57 11.07 4518.44 00:27:49.459 00:27:49.459 01:56:13 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:49.459 EAL: No free 2048 kB hugepages reported on node 1 00:27:50.867 Initializing NVMe Controllers 00:27:50.867 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:50.867 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:50.867 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:50.867 Initialization complete. Launching workers. 
00:27:50.867 ======================================================== 00:27:50.867 Latency(us) 00:27:50.867 Device Information : IOPS MiB/s Average min max 00:27:50.867 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 114.00 0.45 9161.27 147.81 45924.58 00:27:50.867 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 76.00 0.30 13216.85 4997.27 47903.70 00:27:50.867 ======================================================== 00:27:50.867 Total : 190.00 0.74 10783.51 147.81 47903.70 00:27:50.867 00:27:50.867 01:56:14 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:50.867 EAL: No free 2048 kB hugepages reported on node 1 00:27:51.842 Initializing NVMe Controllers 00:27:51.842 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:51.842 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:51.842 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:51.842 Initialization complete. Launching workers. 00:27:51.842 ======================================================== 00:27:51.842 Latency(us) 00:27:51.842 Device Information : IOPS MiB/s Average min max 00:27:51.842 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8623.55 33.69 3711.50 544.10 7757.62 00:27:51.842 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3883.80 15.17 8283.46 6519.54 15773.62 00:27:51.842 ======================================================== 00:27:51.842 Total : 12507.35 48.86 5131.19 544.10 15773.62 00:27:51.842 00:27:51.842 01:56:15 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:27:51.842 01:56:15 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:27:51.842 01:56:15 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:51.842 EAL: No free 2048 kB hugepages reported on node 1 00:27:54.365 Initializing NVMe Controllers 00:27:54.365 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:54.365 Controller IO queue size 128, less than required. 00:27:54.365 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:54.365 Controller IO queue size 128, less than required. 00:27:54.365 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:54.365 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:54.365 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:54.365 Initialization complete. Launching workers. 
00:27:54.365 ======================================================== 00:27:54.365 Latency(us) 00:27:54.365 Device Information : IOPS MiB/s Average min max 00:27:54.365 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1713.57 428.39 76197.99 46069.33 114489.53 00:27:54.365 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 555.40 138.85 236295.20 90278.45 374455.97 00:27:54.365 ======================================================== 00:27:54.365 Total : 2268.97 567.24 115386.80 46069.33 374455.97 00:27:54.365 00:27:54.365 01:56:18 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:27:54.365 EAL: No free 2048 kB hugepages reported on node 1 00:27:54.365 No valid NVMe controllers or AIO or URING devices found 00:27:54.365 Initializing NVMe Controllers 00:27:54.365 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:54.365 Controller IO queue size 128, less than required. 00:27:54.365 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:54.365 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:27:54.365 Controller IO queue size 128, less than required. 00:27:54.365 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:54.365 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:27:54.365 WARNING: Some requested NVMe devices were skipped 00:27:54.365 01:56:18 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:27:54.365 EAL: No free 2048 kB hugepages reported on node 1 00:27:56.889 Initializing NVMe Controllers 00:27:56.889 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:56.889 Controller IO queue size 128, less than required. 00:27:56.889 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:56.889 Controller IO queue size 128, less than required. 00:27:56.889 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:56.889 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:56.889 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:56.889 Initialization complete. Launching workers. 
00:27:56.889 00:27:56.889 ==================== 00:27:56.889 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:27:56.889 TCP transport: 00:27:56.889 polls: 11302 00:27:56.889 idle_polls: 7784 00:27:56.889 sock_completions: 3518 00:27:56.889 nvme_completions: 6265 00:27:56.889 submitted_requests: 9356 00:27:56.889 queued_requests: 1 00:27:56.889 00:27:56.889 ==================== 00:27:56.889 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:27:56.889 TCP transport: 00:27:56.889 polls: 11855 00:27:56.889 idle_polls: 8396 00:27:56.889 sock_completions: 3459 00:27:56.889 nvme_completions: 6191 00:27:56.889 submitted_requests: 9342 00:27:56.889 queued_requests: 1 00:27:56.889 ======================================================== 00:27:56.889 Latency(us) 00:27:56.889 Device Information : IOPS MiB/s Average min max 00:27:56.889 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1563.38 390.84 83998.91 57485.88 132056.82 00:27:56.889 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1544.91 386.23 83958.48 41426.70 121083.03 00:27:56.889 ======================================================== 00:27:56.889 Total : 3108.29 777.07 83978.81 41426.70 132056.82 00:27:56.889 00:27:56.889 01:56:20 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:27:56.889 01:56:20 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:57.147 01:56:21 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:27:57.147 01:56:21 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:0b:00.0 ']' 00:27:57.147 01:56:21 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:28:01.334 01:56:24 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=4231aa5f-26b1-4daa-8714-ae0f6e7778fc 00:28:01.334 01:56:24 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 4231aa5f-26b1-4daa-8714-ae0f6e7778fc 00:28:01.334 01:56:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_uuid=4231aa5f-26b1-4daa-8714-ae0f6e7778fc 00:28:01.334 01:56:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local lvs_info 00:28:01.334 01:56:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local fc 00:28:01.334 01:56:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local cs 00:28:01.334 01:56:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:01.334 01:56:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # lvs_info='[ 00:28:01.334 { 00:28:01.334 "uuid": "4231aa5f-26b1-4daa-8714-ae0f6e7778fc", 00:28:01.334 "name": "lvs_0", 00:28:01.334 "base_bdev": "Nvme0n1", 00:28:01.334 "total_data_clusters": 238234, 00:28:01.334 "free_clusters": 238234, 00:28:01.334 "block_size": 512, 00:28:01.334 "cluster_size": 4194304 00:28:01.334 } 00:28:01.334 ]' 00:28:01.334 01:56:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="4231aa5f-26b1-4daa-8714-ae0f6e7778fc") .free_clusters' 00:28:01.334 01:56:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # fc=238234 00:28:01.334 01:56:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # jq '.[] | select(.uuid=="4231aa5f-26b1-4daa-8714-ae0f6e7778fc") .cluster_size' 00:28:01.334 01:56:24 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # cs=4194304 00:28:01.334 01:56:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # free_mb=952936 00:28:01.334 01:56:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1371 -- # echo 952936 00:28:01.334 952936 00:28:01.334 01:56:24 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:28:01.334 01:56:24 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:28:01.334 01:56:24 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4231aa5f-26b1-4daa-8714-ae0f6e7778fc lbd_0 20480 00:28:01.334 01:56:25 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=a0597094-427a-4e3c-a678-52732762a35b 00:28:01.334 01:56:25 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore a0597094-427a-4e3c-a678-52732762a35b lvs_n_0 00:28:02.266 01:56:25 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=07bcf590-0f84-4455-9950-409bd8673549 00:28:02.266 01:56:25 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 07bcf590-0f84-4455-9950-409bd8673549 00:28:02.266 01:56:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_uuid=07bcf590-0f84-4455-9950-409bd8673549 00:28:02.266 01:56:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local lvs_info 00:28:02.266 01:56:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local fc 00:28:02.266 01:56:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local cs 00:28:02.266 01:56:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:02.523 01:56:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # lvs_info='[ 00:28:02.523 { 00:28:02.523 "uuid": "4231aa5f-26b1-4daa-8714-ae0f6e7778fc", 00:28:02.523 "name": "lvs_0", 00:28:02.523 "base_bdev": "Nvme0n1", 00:28:02.524 "total_data_clusters": 238234, 00:28:02.524 "free_clusters": 233114, 00:28:02.524 "block_size": 512, 00:28:02.524 "cluster_size": 4194304 00:28:02.524 }, 00:28:02.524 { 00:28:02.524 "uuid": "07bcf590-0f84-4455-9950-409bd8673549", 00:28:02.524 "name": "lvs_n_0", 00:28:02.524 "base_bdev": "a0597094-427a-4e3c-a678-52732762a35b", 00:28:02.524 "total_data_clusters": 5114, 00:28:02.524 "free_clusters": 5114, 00:28:02.524 "block_size": 512, 00:28:02.524 "cluster_size": 4194304 00:28:02.524 } 00:28:02.524 ]' 00:28:02.524 01:56:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="07bcf590-0f84-4455-9950-409bd8673549") .free_clusters' 00:28:02.524 01:56:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # fc=5114 00:28:02.524 01:56:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # jq '.[] | select(.uuid=="07bcf590-0f84-4455-9950-409bd8673549") .cluster_size' 00:28:02.524 01:56:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # cs=4194304 00:28:02.524 01:56:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # free_mb=20456 00:28:02.524 01:56:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1371 -- # echo 20456 00:28:02.524 20456 00:28:02.524 01:56:26 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:28:02.524 01:56:26 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 07bcf590-0f84-4455-9950-409bd8673549 lbd_nest_0 20456 00:28:02.781 01:56:26 nvmf_tcp.nvmf_perf -- 
host/perf.sh@88 -- # lb_nested_guid=94699573-3d9b-433c-9fe7-3d1688bb6b61
00:28:02.781 01:56:26 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:28:03.039 01:56:26 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid
00:28:03.039 01:56:26 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 94699573-3d9b-433c-9fe7-3d1688bb6b61
00:28:03.296 01:56:27 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:03.554 01:56:27 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128")
00:28:03.554 01:56:27 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072")
00:28:03.554 01:56:27 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:28:03.554 01:56:27 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:28:03.554 01:56:27 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:28:03.554 EAL: No free 2048 kB hugepages reported on node 1
00:28:15.742 Initializing NVMe Controllers
00:28:15.742 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:15.742 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:15.742 Initialization complete. Launching workers.
00:28:15.742 ========================================================
00:28:15.742 Latency(us)
00:28:15.742 Device Information : IOPS MiB/s Average min max
00:28:15.742 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 46.38 0.02 21561.19 182.25 46046.46
00:28:15.742 ========================================================
00:28:15.742 Total : 46.38 0.02 21561.19 182.25 46046.46
00:28:15.742
00:28:15.742 01:56:37 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:28:15.742 01:56:37 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:28:15.742 EAL: No free 2048 kB hugepages reported on node 1
00:28:25.710 Initializing NVMe Controllers
00:28:25.710 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:25.710 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:25.710 Initialization complete. Launching workers.
00:28:25.710 ========================================================
00:28:25.710 Latency(us)
00:28:25.710 Device Information : IOPS MiB/s Average min max
00:28:25.710 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 75.80 9.47 13198.05 6041.09 47899.99
00:28:25.710 ========================================================
00:28:25.710 Total : 75.80 9.47 13198.05 6041.09 47899.99
00:28:25.710
00:28:25.710 01:56:47 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:28:25.710 01:56:47 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:28:25.710 01:56:47 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:28:25.710 EAL: No free 2048 kB hugepages reported on node 1
00:28:35.687 Initializing NVMe Controllers
00:28:35.687 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:35.687 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:35.687 Initialization complete. Launching workers.
00:28:35.687 ========================================================
00:28:35.687 Latency(us)
00:28:35.687 Device Information : IOPS MiB/s Average min max
00:28:35.687 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7613.80 3.72 4204.40 308.48 10023.60
00:28:35.687 ========================================================
00:28:35.687 Total : 7613.80 3.72 4204.40 308.48 10023.60
00:28:35.687
00:28:35.687 01:56:58 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:28:35.687 01:56:58 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:28:35.687 EAL: No free 2048 kB hugepages reported on node 1
00:28:45.659 Initializing NVMe Controllers
00:28:45.659 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:45.659 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:45.659 Initialization complete. Launching workers.
00:28:45.659 ========================================================
00:28:45.659 Latency(us)
00:28:45.659 Device Information : IOPS MiB/s Average min max
00:28:45.659 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3810.23 476.28 8399.36 743.59 18299.64
00:28:45.659 ========================================================
00:28:45.659 Total : 3810.23 476.28 8399.36 743.59 18299.64
00:28:45.659
00:28:45.659 01:57:08 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:28:45.659 01:57:08 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:28:45.659 01:57:08 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:28:45.659 EAL: No free 2048 kB hugepages reported on node 1
00:28:55.619 Initializing NVMe Controllers
00:28:55.619 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:55.619 Controller IO queue size 128, less than required.
00:28:55.619 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:55.619 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:55.619 Initialization complete. Launching workers.
00:28:55.619 ========================================================
00:28:55.619 Latency(us)
00:28:55.619 Device Information : IOPS MiB/s Average min max
00:28:55.619 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11904.60 5.81 10753.95 1882.71 25432.33
00:28:55.619 ========================================================
00:28:55.619 Total : 11904.60 5.81 10753.95 1882.71 25432.33
00:28:55.619
00:28:55.619 01:57:18 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:28:55.619 01:57:18 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:28:55.619 EAL: No free 2048 kB hugepages reported on node 1
00:29:05.583 Initializing NVMe Controllers
00:29:05.583 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:05.583 Controller IO queue size 128, less than required.
00:29:05.583 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:05.583 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:05.583 Initialization complete. Launching workers.
00:29:05.583 ========================================================
00:29:05.583 Latency(us)
00:29:05.583 Device Information : IOPS MiB/s Average min max
00:29:05.583 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1187.34 148.42 108373.39 16224.12 215718.97
00:29:05.583 ========================================================
00:29:05.583 Total : 1187.34 148.42 108373.39 16224.12 215718.97
00:29:05.583
00:29:05.583 01:57:29 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:05.841 01:57:29 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 94699573-3d9b-433c-9fe7-3d1688bb6b61
00:29:06.406 01:57:30 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0
00:29:06.663 01:57:30 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a0597094-427a-4e3c-a678-52732762a35b
00:29:07.229 01:57:30 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0
00:29:07.229 01:57:31 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:29:07.229 01:57:31 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:29:07.229 01:57:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup
00:29:07.229 01:57:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync
00:29:07.229 01:57:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:29:07.229 01:57:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e
00:29:07.229 01:57:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20}
00:29:07.229 01:57:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:29:07.229 rmmod nvme_tcp
00:29:07.229 rmmod nvme_fabrics
00:29:07.229 rmmod nvme_keyring 01:57:31
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:07.229 01:57:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:29:07.229 01:57:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:29:07.229 01:57:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 4152231 ']' 00:29:07.229 01:57:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 4152231 00:29:07.229 01:57:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@947 -- # '[' -z 4152231 ']' 00:29:07.229 01:57:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # kill -0 4152231 00:29:07.229 01:57:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # uname 00:29:07.486 01:57:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:29:07.486 01:57:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4152231 00:29:07.486 01:57:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:29:07.486 01:57:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:29:07.486 01:57:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4152231' 00:29:07.486 killing process with pid 4152231 00:29:07.486 01:57:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # kill 4152231 00:29:07.486 [2024-05-15 01:57:31.183785] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:07.486 01:57:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@971 -- # wait 4152231 00:29:08.857 01:57:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:08.857 01:57:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:08.857 01:57:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:08.857 01:57:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:08.857 01:57:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:08.857 01:57:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:08.857 01:57:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:08.857 01:57:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:11.386 01:57:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:11.386 00:29:11.386 real 1m30.833s 00:29:11.386 user 5m31.242s 00:29:11.386 sys 0m16.159s 00:29:11.386 01:57:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:29:11.386 01:57:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:11.386 ************************************ 00:29:11.386 END TEST nvmf_perf 00:29:11.386 ************************************ 00:29:11.386 01:57:34 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:11.386 01:57:34 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:29:11.386 01:57:34 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:29:11.386 01:57:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:11.386 ************************************ 00:29:11.386 START TEST nvmf_fio_host 00:29:11.386 ************************************ 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:11.386 * Looking for test storage... 00:29:11.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
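
Annotation on the nvmf_perf pass that finished above: host/perf.sh sizes the nested lvol from the store's free space (get_lvs_free_mb resolves 5114 free clusters x 4 MiB cluster_size to 20456 MiB, which is below the 20480 MiB cap, so lbd_nest_0 is created at 20456 MiB), exposes it as namespace 1 of nqn.2016-06.io.spdk:cnode1, then sweeps spdk_nvme_perf over every queue-depth/IO-size pair. A minimal sketch of that sweep, assuming $SPDK_ROOT points at an SPDK build tree ($SPDK_ROOT is a placeholder here; the trace uses absolute workspace paths) and a target already listening on 10.0.0.2:4420:

    # one 10-second 50/50 random read/write run per (queue depth, IO size) pair,
    # mirroring host/perf.sh@95-99 in the trace above
    qd_depth=("1" "32" "128")
    io_size=("512" "131072")
    for qd in "${qd_depth[@]}"; do
      for o in "${io_size[@]}"; do
        "$SPDK_ROOT/build/bin/spdk_nvme_perf" -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
      done
    done

The six latency tables above behave as expected: deeper queues multiply IOPS at 512 B (46.38 at qd=1 versus 11904.60 at qd=128), while the 128 KiB runs trade IOPS for bandwidth and carry proportionally higher average latency.
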
00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # nvmftestinit 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:29:11.386 01:57:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
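
Annotation: nvmf/common.sh, sourced just above, pins the test topology in environment variables (ports 4420-4422, NVMF_SERIAL, NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn) and generates the initiator identity once with nvme-cli. A minimal sketch of that identity pattern; the exact NVME_HOSTID derivation is an assumption here, since the trace only shows the resulting values:

    # generate a host NQN once, reuse it for every nvme-cli invocation
    NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # assumed: keep only the trailing UUID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    # illustrative kernel-initiator use; this particular test instead drives IO
    # through the userspace fio plugin further below:
    # nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn
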
00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:13.933 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:13.933 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:13.933 Found net devices under 0000:09:00.0: cvl_0_0 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:13.933 Found net devices under 0000:09:00.1: cvl_0_1 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp 
]] 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:13.933 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:13.933 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:29:13.933 00:29:13.933 --- 10.0.0.2 ping statistics --- 00:29:13.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.933 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:13.933 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:13.933 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:29:13.933 00:29:13.933 --- 10.0.0.1 ping statistics --- 00:29:13.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.933 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # [[ y != y ]] 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@721 -- # xtrace_disable 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@22 -- # nvmfpid=4165150 00:29:13.933 01:57:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:13.934 01:57:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:13.934 01:57:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # waitforlisten 4165150 00:29:13.934 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@828 -- # '[' -z 4165150 ']' 00:29:13.934 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:13.934 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local max_retries=100 00:29:13.934 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:13.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:13.934 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@837 -- # xtrace_disable 00:29:13.934 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.934 [2024-05-15 01:57:37.501804] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:29:13.934 [2024-05-15 01:57:37.501881] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:13.934 EAL: No free 2048 kB hugepages reported on node 1 00:29:13.934 [2024-05-15 01:57:37.575666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:13.934 [2024-05-15 01:57:37.658969] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
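
Annotation: nvmf_tcp_init above turns one dual-port machine into both ends of the fabric by moving the target port into a private network namespace; nvmf_tgt is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF). Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk                  # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator port stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns

Both pings succeed above (0.255 ms and 0.117 ms round trips), confirming the two ports reach each other before the target comes up.
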
00:29:13.934 [2024-05-15 01:57:37.659033] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:13.934 [2024-05-15 01:57:37.659063] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:13.934 [2024-05-15 01:57:37.659075] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:13.934 [2024-05-15 01:57:37.659085] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:13.934 [2024-05-15 01:57:37.659165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:13.934 [2024-05-15 01:57:37.659239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:13.934 [2024-05-15 01:57:37.659297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:13.934 [2024-05-15 01:57:37.659300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:13.934 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:29:13.934 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@861 -- # return 0 00:29:13.934 01:57:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:13.934 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:13.934 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.934 [2024-05-15 01:57:37.794925] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:13.934 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:13.934 01:57:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:29:13.934 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@727 -- # xtrace_disable 00:29:13.934 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.934 01:57:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:13.934 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:13.934 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.934 Malloc1 00:29:13.934 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:13.934 01:57:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:13.934 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:13.934 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.192 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:14.192 01:57:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:14.192 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:14.192 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.192 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:14.192 01:57:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:14.192 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:14.192 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 
-- # set +x 00:29:14.192 [2024-05-15 01:57:37.876326] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:14.192 [2024-05-15 01:57:37.876655] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:14.192 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:14.192 01:57:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:14.192 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:14.192 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.192 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:14.192 01:57:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:14.192 01:57:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:14.192 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1357 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:14.192 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:29:14.192 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:14.192 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local sanitizers 00:29:14.192 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:14.192 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # shift 00:29:14.192 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local asan_lib= 00:29:14.192 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:29:14.192 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:14.192 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libasan 00:29:14.192 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:29:14.192 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:29:14.192 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:29:14.192 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:29:14.192 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:14.192 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:29:14.192 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:29:14.192 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:29:14.192 
01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:29:14.192 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:14.192 01:57:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:14.192 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:14.192 fio-3.35 00:29:14.192 Starting 1 thread 00:29:14.449 EAL: No free 2048 kB hugepages reported on node 1 00:29:16.975 00:29:16.975 test: (groupid=0, jobs=1): err= 0: pid=4165366: Wed May 15 01:57:40 2024 00:29:16.975 read: IOPS=8752, BW=34.2MiB/s (35.8MB/s)(68.6MiB/2006msec) 00:29:16.975 slat (nsec): min=1909, max=223631, avg=2823.42, stdev=2576.40 00:29:16.975 clat (usec): min=2776, max=13787, avg=7986.24, stdev=660.43 00:29:16.975 lat (usec): min=2809, max=13789, avg=7989.07, stdev=660.27 00:29:16.975 clat percentiles (usec): 00:29:16.975 | 1.00th=[ 6456], 5.00th=[ 6915], 10.00th=[ 7177], 20.00th=[ 7504], 00:29:16.975 | 30.00th=[ 7701], 40.00th=[ 7832], 50.00th=[ 7963], 60.00th=[ 8160], 00:29:16.975 | 70.00th=[ 8291], 80.00th=[ 8586], 90.00th=[ 8717], 95.00th=[ 8979], 00:29:16.975 | 99.00th=[ 9372], 99.50th=[ 9634], 99.90th=[11994], 99.95th=[13566], 00:29:16.975 | 99.99th=[13829] 00:29:16.975 bw ( KiB/s): min=33808, max=35752, per=99.92%, avg=34980.00, stdev=828.00, samples=4 00:29:16.975 iops : min= 8452, max= 8938, avg=8745.00, stdev=207.00, samples=4 00:29:16.975 write: IOPS=8752, BW=34.2MiB/s (35.9MB/s)(68.6MiB/2006msec); 0 zone resets 00:29:16.975 slat (usec): min=2, max=174, avg= 3.06, stdev= 1.76 00:29:16.975 clat (usec): min=2007, max=12226, avg=6547.96, stdev=550.11 00:29:16.975 lat (usec): min=2019, max=12229, avg=6551.02, stdev=550.05 00:29:16.975 clat percentiles (usec): 00:29:16.975 | 1.00th=[ 5342], 5.00th=[ 5735], 10.00th=[ 5932], 20.00th=[ 6128], 00:29:16.975 | 30.00th=[ 6259], 40.00th=[ 6456], 50.00th=[ 6521], 60.00th=[ 6652], 00:29:16.975 | 70.00th=[ 6783], 80.00th=[ 6980], 90.00th=[ 7177], 95.00th=[ 7373], 00:29:16.975 | 99.00th=[ 7701], 99.50th=[ 8029], 99.90th=[10552], 99.95th=[11469], 00:29:16.975 | 99.99th=[12125] 00:29:16.975 bw ( KiB/s): min=34800, max=35352, per=99.96%, avg=34996.00, stdev=254.87, samples=4 00:29:16.975 iops : min= 8700, max= 8838, avg=8749.00, stdev=63.72, samples=4 00:29:16.975 lat (msec) : 4=0.11%, 10=99.69%, 20=0.20% 00:29:16.975 cpu : usr=62.74%, sys=34.61%, ctx=34, majf=0, minf=34 00:29:16.975 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:29:16.975 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:16.975 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:16.975 issued rwts: total=17557,17558,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:16.975 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:16.975 00:29:16.975 Run status group 0 (all jobs): 00:29:16.975 READ: bw=34.2MiB/s (35.8MB/s), 34.2MiB/s-34.2MiB/s (35.8MB/s-35.8MB/s), io=68.6MiB (71.9MB), run=2006-2006msec 00:29:16.975 WRITE: bw=34.2MiB/s (35.9MB/s), 34.2MiB/s-34.2MiB/s (35.9MB/s-35.9MB/s), io=68.6MiB (71.9MB), run=2006-2006msec 00:29:16.975 01:57:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@43 -- # fio_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:16.975 01:57:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1357 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:16.975 01:57:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:29:16.975 01:57:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:16.975 01:57:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local sanitizers 00:29:16.975 01:57:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:16.975 01:57:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # shift 00:29:16.975 01:57:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local asan_lib= 00:29:16.975 01:57:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:29:16.975 01:57:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:16.975 01:57:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libasan 00:29:16.975 01:57:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:29:16.975 01:57:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:29:16.975 01:57:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:29:16.975 01:57:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:29:16.975 01:57:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:16.975 01:57:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:29:16.975 01:57:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:29:16.975 01:57:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:29:16.975 01:57:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:29:16.975 01:57:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:16.975 01:57:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:16.975 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:29:16.975 fio-3.35 00:29:16.975 Starting 1 thread 00:29:16.975 EAL: No free 2048 kB hugepages reported on node 1 00:29:19.513 00:29:19.513 test: (groupid=0, jobs=1): err= 0: pid=4165695: Wed May 15 01:57:43 2024 00:29:19.513 read: IOPS=8378, BW=131MiB/s (137MB/s)(263MiB/2006msec) 00:29:19.513 slat (nsec): min=2837, max=94066, avg=3526.84, stdev=1578.07 00:29:19.513 clat (usec): min=2649, max=19201, avg=8819.11, stdev=2211.01 00:29:19.513 lat (usec): min=2653, max=19205, avg=8822.64, 
stdev=2211.12 00:29:19.513 clat percentiles (usec): 00:29:19.513 | 1.00th=[ 4555], 5.00th=[ 5604], 10.00th=[ 6325], 20.00th=[ 6980], 00:29:19.513 | 30.00th=[ 7439], 40.00th=[ 8029], 50.00th=[ 8586], 60.00th=[ 9241], 00:29:19.513 | 70.00th=[ 9765], 80.00th=[10552], 90.00th=[11731], 95.00th=[12911], 00:29:19.513 | 99.00th=[14746], 99.50th=[16188], 99.90th=[18482], 99.95th=[18744], 00:29:19.513 | 99.99th=[19268] 00:29:19.513 bw ( KiB/s): min=57696, max=79104, per=52.23%, avg=70024.00, stdev=10741.88, samples=4 00:29:19.513 iops : min= 3606, max= 4944, avg=4376.50, stdev=671.37, samples=4 00:29:19.513 write: IOPS=5097, BW=79.6MiB/s (83.5MB/s)(144MiB/1804msec); 0 zone resets 00:29:19.513 slat (usec): min=30, max=215, avg=33.54, stdev= 5.28 00:29:19.513 clat (usec): min=5662, max=18667, avg=11200.11, stdev=1927.49 00:29:19.513 lat (usec): min=5695, max=18706, avg=11233.65, stdev=1927.53 00:29:19.513 clat percentiles (usec): 00:29:19.513 | 1.00th=[ 7308], 5.00th=[ 8094], 10.00th=[ 8717], 20.00th=[ 9503], 00:29:19.513 | 30.00th=[10159], 40.00th=[10683], 50.00th=[11076], 60.00th=[11600], 00:29:19.513 | 70.00th=[12125], 80.00th=[12780], 90.00th=[13698], 95.00th=[14615], 00:29:19.513 | 99.00th=[16188], 99.50th=[16909], 99.90th=[18220], 99.95th=[18220], 00:29:19.513 | 99.99th=[18744] 00:29:19.513 bw ( KiB/s): min=61344, max=82208, per=89.62%, avg=73088.00, stdev=10202.83, samples=4 00:29:19.513 iops : min= 3834, max= 5138, avg=4568.00, stdev=637.68, samples=4 00:29:19.513 lat (msec) : 4=0.16%, 10=56.96%, 20=42.88% 00:29:19.513 cpu : usr=75.76%, sys=22.49%, ctx=43, majf=0, minf=60 00:29:19.513 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:29:19.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:19.513 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:19.513 issued rwts: total=16808,9195,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:19.513 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:19.513 00:29:19.513 Run status group 0 (all jobs): 00:29:19.513 READ: bw=131MiB/s (137MB/s), 131MiB/s-131MiB/s (137MB/s-137MB/s), io=263MiB (275MB), run=2006-2006msec 00:29:19.513 WRITE: bw=79.6MiB/s (83.5MB/s), 79.6MiB/s-79.6MiB/s (83.5MB/s-83.5MB/s), io=144MiB (151MB), run=1804-1804msec 00:29:19.513 01:57:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:19.513 01:57:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:19.513 01:57:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.513 01:57:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:19.513 01:57:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # '[' 1 -eq 1 ']' 00:29:19.513 01:57:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # bdfs=($(get_nvme_bdfs)) 00:29:19.513 01:57:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # get_nvme_bdfs 00:29:19.513 01:57:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # bdfs=() 00:29:19.513 01:57:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # local bdfs 00:29:19.513 01:57:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1511 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:19.513 01:57:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1511 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:19.513 01:57:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1511 -- # jq -r 
'.config[].params.traddr' 00:29:19.513 01:57:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1512 -- # (( 1 == 0 )) 00:29:19.513 01:57:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1516 -- # printf '%s\n' 0000:0b:00.0 00:29:19.513 01:57:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@50 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:0b:00.0 -i 10.0.0.2 00:29:19.513 01:57:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:19.513 01:57:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.035 Nvme0n1 00:29:22.035 01:57:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:22.035 01:57:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # rpc_cmd bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:29:22.035 01:57:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:22.035 01:57:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # ls_guid=c7483675-aeef-42e0-804c-b8fb9c89753b 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # get_lvs_free_mb c7483675-aeef-42e0-804c-b8fb9c89753b 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_uuid=c7483675-aeef-42e0-804c-b8fb9c89753b 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local lvs_info 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local fc 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local cs 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # rpc_cmd bdev_lvol_get_lvstores 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # lvs_info='[ 00:29:25.311 { 00:29:25.311 "uuid": "c7483675-aeef-42e0-804c-b8fb9c89753b", 00:29:25.311 "name": "lvs_0", 00:29:25.311 "base_bdev": "Nvme0n1", 00:29:25.311 "total_data_clusters": 930, 00:29:25.311 "free_clusters": 930, 00:29:25.311 "block_size": 512, 00:29:25.311 "cluster_size": 1073741824 00:29:25.311 } 00:29:25.311 ]' 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="c7483675-aeef-42e0-804c-b8fb9c89753b") .free_clusters' 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # fc=930 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # jq '.[] | select(.uuid=="c7483675-aeef-42e0-804c-b8fb9c89753b") .cluster_size' 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # cs=1073741824 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # free_mb=952320 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1371 -- # echo 952320 00:29:25.311 952320 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # rpc_cmd bdev_lvol_create -l lvs_0 lbd_0 952320 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:25.311 01:57:48 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.311 616c30b0-d108-417f-9c80-d699281c8c3f 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1357 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local sanitizers 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # shift 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local asan_lib= 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libasan 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 
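The loop being traced here (first iteration above, second iteration continuing just below) probes the SPDK fio plugin with ldd for linked sanitizer runtimes so they can be preloaded ahead of it. A minimal standalone sketch of the same pattern, assuming an illustrative plugin path of ./spdk_nvme rather than the full jenkins workspace path:

#!/usr/bin/env bash
# Sketch of the sanitizer-preload probe used by fio_plugin (plugin path is illustrative).
plugin=./spdk_nvme
sanitizers=('libasan' 'libclang_rt.asan')
asan_preload=
for sanitizer in "${sanitizers[@]}"; do
    # Column 3 of ldd output is the resolved library path; empty if the plugin is not linked against it.
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n "$asan_lib" ]] && asan_preload="$asan_lib $asan_preload"
done
# fio dlopen()s the plugin, so any sanitizer runtime must be loaded before it via LD_PRELOAD.
LD_PRELOAD="$asan_preload $plugin" /usr/src/fio/fio example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

In this particular run both greps come up empty (asan_lib=), so LD_PRELOAD ends up containing only the plugin itself.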
00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:25.311 01:57:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:25.311 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:25.311 fio-3.35 00:29:25.311 Starting 1 thread 00:29:25.312 EAL: No free 2048 kB hugepages reported on node 1 00:29:27.838 00:29:27.838 test: (groupid=0, jobs=1): err= 0: pid=4166715: Wed May 15 01:57:51 2024 00:29:27.838 read: IOPS=5294, BW=20.7MiB/s (21.7MB/s)(41.5MiB/2008msec) 00:29:27.838 slat (nsec): min=1998, max=154172, avg=2596.70, stdev=2161.95 00:29:27.838 clat (usec): min=1171, max=173314, avg=13126.88, stdev=12358.24 00:29:27.838 lat (usec): min=1174, max=173363, avg=13129.48, stdev=12358.56 00:29:27.838 clat percentiles (msec): 00:29:27.838 | 1.00th=[ 10], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 12], 00:29:27.838 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 13], 00:29:27.838 | 70.00th=[ 13], 80.00th=[ 14], 90.00th=[ 14], 95.00th=[ 15], 00:29:27.838 | 99.00th=[ 17], 99.50th=[ 157], 99.90th=[ 174], 99.95th=[ 174], 00:29:27.838 | 99.99th=[ 174] 00:29:27.838 bw ( KiB/s): min=14888, max=23304, per=99.69%, avg=21114.00, stdev=4153.22, samples=4 00:29:27.838 iops : min= 3722, max= 5826, avg=5278.50, stdev=1038.30, samples=4 00:29:27.838 write: IOPS=5283, BW=20.6MiB/s (21.6MB/s)(41.4MiB/2008msec); 0 zone resets 00:29:27.838 slat (usec): min=2, max=104, avg= 2.74, stdev= 1.41 00:29:27.838 clat (usec): min=441, max=170717, avg=10899.46, stdev=11569.34 00:29:27.838 lat (usec): min=444, max=170723, avg=10902.21, stdev=11569.60 00:29:27.838 clat percentiles (msec): 00:29:27.838 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 10], 00:29:27.838 | 30.00th=[ 10], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 11], 00:29:27.838 | 70.00th=[ 11], 80.00th=[ 11], 90.00th=[ 12], 95.00th=[ 12], 00:29:27.838 | 99.00th=[ 15], 99.50th=[ 155], 99.90th=[ 171], 99.95th=[ 171], 00:29:27.838 | 99.99th=[ 171] 00:29:27.838 bw ( KiB/s): min=15592, max=23224, per=99.91%, avg=21114.00, stdev=3686.33, samples=4 00:29:27.838 iops : min= 3898, max= 5806, avg=5278.50, stdev=921.58, samples=4 00:29:27.838 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:29:27.838 lat (msec) : 2=0.03%, 4=0.08%, 10=26.18%, 20=73.09%, 250=0.60% 00:29:27.838 cpu : usr=59.44%, sys=38.71%, ctx=99, majf=0, minf=34 00:29:27.838 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:29:27.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:27.838 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:27.838 issued rwts: total=10632,10609,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:29:27.838 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:27.838 00:29:27.838 Run status group 0 (all jobs): 00:29:27.838 READ: bw=20.7MiB/s (21.7MB/s), 20.7MiB/s-20.7MiB/s (21.7MB/s-21.7MB/s), io=41.5MiB (43.5MB), run=2008-2008msec 00:29:27.838 WRITE: bw=20.6MiB/s (21.6MB/s), 20.6MiB/s-20.6MiB/s (21.6MB/s-21.6MB/s), io=41.4MiB (43.5MB), run=2008-2008msec 00:29:27.838 01:57:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:27.838 01:57:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:27.838 01:57:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.838 01:57:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:27.838 01:57:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@62 -- # rpc_cmd bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:29:27.838 01:57:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:27.838 01:57:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.768 01:57:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:28.768 01:57:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@62 -- # ls_nested_guid=6e0a8655-1f44-4e92-8068-0b21a3bf307d 00:29:28.768 01:57:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@63 -- # get_lvs_free_mb 6e0a8655-1f44-4e92-8068-0b21a3bf307d 00:29:28.768 01:57:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_uuid=6e0a8655-1f44-4e92-8068-0b21a3bf307d 00:29:28.768 01:57:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local lvs_info 00:29:28.768 01:57:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local fc 00:29:28.768 01:57:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local cs 00:29:28.768 01:57:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # rpc_cmd bdev_lvol_get_lvstores 00:29:28.768 01:57:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:28.768 01:57:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.768 01:57:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:28.768 01:57:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # lvs_info='[ 00:29:28.768 { 00:29:28.768 "uuid": "c7483675-aeef-42e0-804c-b8fb9c89753b", 00:29:28.768 "name": "lvs_0", 00:29:28.768 "base_bdev": "Nvme0n1", 00:29:28.768 "total_data_clusters": 930, 00:29:28.768 "free_clusters": 0, 00:29:28.768 "block_size": 512, 00:29:28.768 "cluster_size": 1073741824 00:29:28.768 }, 00:29:28.768 { 00:29:28.768 "uuid": "6e0a8655-1f44-4e92-8068-0b21a3bf307d", 00:29:28.768 "name": "lvs_n_0", 00:29:28.768 "base_bdev": "616c30b0-d108-417f-9c80-d699281c8c3f", 00:29:28.768 "total_data_clusters": 237847, 00:29:28.769 "free_clusters": 237847, 00:29:28.769 "block_size": 512, 00:29:28.769 "cluster_size": 4194304 00:29:28.769 } 00:29:28.769 ]' 00:29:28.769 01:57:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="6e0a8655-1f44-4e92-8068-0b21a3bf307d") .free_clusters' 00:29:28.769 01:57:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # fc=237847 00:29:28.769 01:57:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # jq '.[] | select(.uuid=="6e0a8655-1f44-4e92-8068-0b21a3bf307d") .cluster_size' 00:29:28.769 01:57:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # cs=4194304 
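For reference, the get_lvs_free_mb helper traced here does nothing more than multiply free_clusters by cluster_size and convert to MiB. A worked check of both lvstores in this run, using the values from the bdev_lvol_get_lvstores output above (the second result appears in the trace immediately below):

# lvs_0: 930 free clusters of 1073741824 B (1 GiB) each, reported in MiB
echo $(( 930 * 1073741824 / 1048576 ))       # -> 952320
# lvs_n_0: 237847 free clusters of 4194304 B (4 MiB) each, reported in MiB
echo $(( 237847 * 4194304 / 1048576 ))       # -> 951388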
00:29:28.769 01:57:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # free_mb=951388 00:29:28.769 01:57:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1371 -- # echo 951388 00:29:28.769 951388 00:29:28.769 01:57:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # rpc_cmd bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:29:28.769 01:57:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:28.769 01:57:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.333 97aae89e-71ac-48a9-83fa-ff5e281e7859 00:29:29.333 01:57:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:29.333 01:57:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:29:29.333 01:57:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:29.333 01:57:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.333 01:57:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:29.333 01:57:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:29:29.333 01:57:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:29.333 01:57:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.333 01:57:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:29.333 01:57:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:29:29.333 01:57:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:29.333 01:57:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.333 01:57:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:29.333 01:57:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:29.333 01:57:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1357 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:29.333 01:57:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:29:29.333 01:57:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:29.333 01:57:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local sanitizers 00:29:29.333 01:57:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:29.333 01:57:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # shift 00:29:29.333 01:57:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local asan_lib= 00:29:29.333 01:57:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:29:29.333 01:57:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:29.333 01:57:53 nvmf_tcp.nvmf_fio_host 
-- common/autotest_common.sh@1342 -- # grep libasan 00:29:29.333 01:57:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:29:29.333 01:57:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:29:29.333 01:57:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:29:29.333 01:57:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:29:29.333 01:57:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:29.333 01:57:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:29:29.333 01:57:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:29:29.333 01:57:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:29:29.333 01:57:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:29:29.333 01:57:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:29.333 01:57:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:29.333 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:29.333 fio-3.35 00:29:29.333 Starting 1 thread 00:29:29.590 EAL: No free 2048 kB hugepages reported on node 1 00:29:32.114 00:29:32.114 test: (groupid=0, jobs=1): err= 0: pid=4167306: Wed May 15 01:57:55 2024 00:29:32.114 read: IOPS=5702, BW=22.3MiB/s (23.4MB/s)(44.8MiB/2009msec) 00:29:32.114 slat (nsec): min=1892, max=163106, avg=2660.39, stdev=2375.66 00:29:32.114 clat (usec): min=4573, max=18727, avg=12262.93, stdev=1095.92 00:29:32.114 lat (usec): min=4588, max=18730, avg=12265.59, stdev=1095.81 00:29:32.114 clat percentiles (usec): 00:29:32.114 | 1.00th=[ 9765], 5.00th=[10552], 10.00th=[10945], 20.00th=[11338], 00:29:32.114 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12256], 60.00th=[12518], 00:29:32.114 | 70.00th=[12780], 80.00th=[13173], 90.00th=[13566], 95.00th=[13960], 00:29:32.114 | 99.00th=[14615], 99.50th=[14877], 99.90th=[16057], 99.95th=[18482], 00:29:32.114 | 99.99th=[18744] 00:29:32.114 bw ( KiB/s): min=21520, max=23368, per=99.94%, avg=22798.00, stdev=865.22, samples=4 00:29:32.114 iops : min= 5380, max= 5842, avg=5699.50, stdev=216.30, samples=4 00:29:32.114 write: IOPS=5682, BW=22.2MiB/s (23.3MB/s)(44.6MiB/2009msec); 0 zone resets 00:29:32.114 slat (usec): min=2, max=135, avg= 2.77, stdev= 1.80 00:29:32.114 clat (usec): min=2259, max=18357, avg=10017.13, stdev=955.09 00:29:32.114 lat (usec): min=2266, max=18359, avg=10019.90, stdev=955.04 00:29:32.114 clat percentiles (usec): 00:29:32.114 | 1.00th=[ 7898], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[ 9372], 00:29:32.114 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10159], 00:29:32.114 | 70.00th=[10421], 80.00th=[10683], 90.00th=[11076], 95.00th=[11338], 00:29:32.114 | 99.00th=[11994], 99.50th=[12649], 99.90th=[17433], 99.95th=[18220], 00:29:32.114 | 99.99th=[18220] 00:29:32.114 bw ( KiB/s): min=22440, max=22848, per=99.84%, avg=22694.00, stdev=177.37, samples=4 00:29:32.114 iops : min= 5610, max= 5712, avg=5673.50, stdev=44.34, samples=4 00:29:32.114 lat (msec) : 
4=0.05%, 10=25.61%, 20=74.34% 00:29:32.114 cpu : usr=57.19%, sys=40.62%, ctx=116, majf=0, minf=34 00:29:32.114 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:29:32.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:32.114 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:32.114 issued rwts: total=11457,11416,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:32.114 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:32.114 00:29:32.114 Run status group 0 (all jobs): 00:29:32.114 READ: bw=22.3MiB/s (23.4MB/s), 22.3MiB/s-22.3MiB/s (23.4MB/s-23.4MB/s), io=44.8MiB (46.9MB), run=2009-2009msec 00:29:32.114 WRITE: bw=22.2MiB/s (23.3MB/s), 22.2MiB/s-22.2MiB/s (23.3MB/s-23.3MB/s), io=44.6MiB (46.8MB), run=2009-2009msec 00:29:32.114 01:57:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:29:32.114 01:57:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:32.114 01:57:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.114 01:57:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:32.114 01:57:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # sync 00:29:32.114 01:57:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # rpc_cmd bdev_lvol_delete lvs_n_0/lbd_nest_0 00:29:32.114 01:57:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:32.114 01:57:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.391 01:57:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:35.391 01:57:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@75 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_n_0 00:29:35.391 01:57:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:35.391 01:57:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.391 01:57:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:35.391 01:57:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:29:35.391 01:57:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:35.391 01:57:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.735 01:58:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:38.735 01:58:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:29:38.735 01:58:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:38.735 01:58:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.735 01:58:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:38.735 01:58:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:29:38.735 01:58:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:38.735 01:58:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.672 01:58:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:39.672 01:58:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:29:39.672 01:58:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:29:39.672 01:58:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@84 -- # nvmftestfini 00:29:39.672 
01:58:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:39.672 01:58:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:29:39.672 01:58:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:39.672 01:58:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:29:39.672 01:58:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:39.672 01:58:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:39.672 rmmod nvme_tcp 00:29:39.672 rmmod nvme_fabrics 00:29:39.672 rmmod nvme_keyring 00:29:39.672 01:58:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:39.672 01:58:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:29:39.672 01:58:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:29:39.672 01:58:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 4165150 ']' 00:29:39.672 01:58:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 4165150 00:29:39.672 01:58:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@947 -- # '[' -z 4165150 ']' 00:29:39.672 01:58:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # kill -0 4165150 00:29:39.672 01:58:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # uname 00:29:39.672 01:58:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:29:39.672 01:58:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4165150 00:29:39.672 01:58:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:29:39.672 01:58:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:29:39.672 01:58:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4165150' 00:29:39.672 killing process with pid 4165150 00:29:39.672 01:58:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # kill 4165150 00:29:39.672 [2024-05-15 01:58:03.580307] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:39.672 01:58:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@971 -- # wait 4165150 00:29:39.931 01:58:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:39.931 01:58:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:39.931 01:58:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:39.931 01:58:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:39.931 01:58:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:39.931 01:58:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:39.931 01:58:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:39.931 01:58:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:42.461 01:58:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:42.461 00:29:42.461 real 0m31.041s 00:29:42.461 user 1m49.600s 00:29:42.461 sys 0m6.939s 00:29:42.461 01:58:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # xtrace_disable 00:29:42.461 01:58:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.461 
************************************ 00:29:42.461 END TEST nvmf_fio_host 00:29:42.461 ************************************ 00:29:42.461 01:58:05 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:42.461 01:58:05 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:29:42.461 01:58:05 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:29:42.461 01:58:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:42.461 ************************************ 00:29:42.461 START TEST nvmf_failover 00:29:42.461 ************************************ 00:29:42.461 01:58:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:42.461 * Looking for test storage... 00:29:42.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:42.461 01:58:05 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:29:42.462 01:58:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:44.360 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:44.360 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:44.360 01:58:08 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:44.360 Found net devices under 0000:09:00.0: cvl_0_0 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:44.360 Found net devices under 0000:09:00.1: cvl_0_1 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:44.360 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:44.361 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:44.618 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:44.618 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:44.618 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:44.618 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:44.618 01:58:08 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:44.618 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:44.618 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:44.618 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:44.618 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:29:44.618 00:29:44.618 --- 10.0.0.2 ping statistics --- 00:29:44.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:44.618 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:29:44.618 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:44.618 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:44.618 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:29:44.618 00:29:44.618 --- 10.0.0.1 ping statistics --- 00:29:44.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:44.618 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:29:44.618 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:44.618 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:29:44.618 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:44.618 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:44.618 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:44.618 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:44.618 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:44.618 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:44.618 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:44.618 01:58:08 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:29:44.618 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:44.618 01:58:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@721 -- # xtrace_disable 00:29:44.618 01:58:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:44.618 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=4170695 00:29:44.618 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:44.618 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 4170695 00:29:44.618 01:58:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@828 -- # '[' -z 4170695 ']' 00:29:44.618 01:58:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:44.618 01:58:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local max_retries=100 00:29:44.618 01:58:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:44.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
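The nvmf_tcp_init sequence traced above isolates one port of the NIC pair inside a network namespace so that the target (10.0.0.2) and the initiator (10.0.0.1) talk over a real link rather than loopback. Condensed from the trace, the wiring amounts to the following (interface names as in this run):

ip netns add cvl_0_0_ns_spdk                                   # private namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                             # sanity check, as above

The target app itself is then launched inside the namespace ('ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt ...'), which is what the NVMF_APP re-prefix at common.sh@270 above sets up.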
00:29:44.618 01:58:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # xtrace_disable 00:29:44.618 01:58:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:44.618 [2024-05-15 01:58:08.435750] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:29:44.618 [2024-05-15 01:58:08.435838] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:44.618 EAL: No free 2048 kB hugepages reported on node 1 00:29:44.618 [2024-05-15 01:58:08.510118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:44.875 [2024-05-15 01:58:08.591735] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:44.875 [2024-05-15 01:58:08.591829] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:44.875 [2024-05-15 01:58:08.591851] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:44.875 [2024-05-15 01:58:08.591862] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:44.875 [2024-05-15 01:58:08.591887] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:44.875 [2024-05-15 01:58:08.593236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:44.875 [2024-05-15 01:58:08.593306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:44.875 [2024-05-15 01:58:08.593310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:44.875 01:58:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:29:44.875 01:58:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@861 -- # return 0 00:29:44.875 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:44.875 01:58:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@727 -- # xtrace_disable 00:29:44.875 01:58:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:44.875 01:58:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:44.875 01:58:08 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:45.131 [2024-05-15 01:58:08.986605] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:45.131 01:58:09 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:45.387 Malloc0 00:29:45.387 01:58:09 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:45.643 01:58:09 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:45.900 01:58:09 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:46.156 [2024-05-15 01:58:10.078925] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:46.156 [2024-05-15 01:58:10.079347] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:46.413 01:58:10 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:46.413 [2024-05-15 01:58:10.327910] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:46.669 01:58:10 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:46.927 [2024-05-15 01:58:10.625012] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:29:46.927 01:58:10 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=4170989 00:29:46.927 01:58:10 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:29:46.927 01:58:10 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:46.927 01:58:10 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 4170989 /var/tmp/bdevperf.sock 00:29:46.927 01:58:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@828 -- # '[' -z 4170989 ']' 00:29:46.927 01:58:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:46.927 01:58:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local max_retries=100 00:29:46.927 01:58:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:46.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
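What is being assembled here is the failover scenario proper: the subsystem listens on 10.0.0.2 ports 4420, 4421 and 4422, bdevperf (started with -z -r /var/tmp/bdevperf.sock) gets its controllers attached over that private RPC socket, and the test then removes the 4420 listener so I/O has to fail over to the 4421 path. Condensed from the trace around this point, with rpc.py and bdevperf.py standing in for their full workspace paths:

rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
# attach through bdevperf's own RPC socket, one controller per path
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# start I/O in the background, then yank the active listener out from under it
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420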
00:29:46.927 01:58:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # xtrace_disable 00:29:46.927 01:58:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:47.184 01:58:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:29:47.184 01:58:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@861 -- # return 0 00:29:47.184 01:58:10 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:47.748 NVMe0n1 00:29:47.748 01:58:11 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:48.005 00 00:29:48.005 01:58:11 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=4171126 00:29:48.005 01:58:11 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:48.005 01:58:11 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:29:48.938 01:58:12 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:49.195 [2024-05-15 01:58:13.061089] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78bf50 is same with the state(5) to be set 00:29:49.195 [2024-05-15 01:58:13.061158 .. 01:58:13.062137] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: (the line above repeated verbatim, with advancing timestamps, for every subsequent qpair state update in this window)
recv state of tqpair=0x78bf50 is same with the state(5) to be set 00:29:49.196 [2024-05-15 01:58:13.062150] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78bf50 is same with the state(5) to be set 00:29:49.196 [2024-05-15 01:58:13.062162] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78bf50 is same with the state(5) to be set 00:29:49.196 [2024-05-15 01:58:13.062174] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78bf50 is same with the state(5) to be set 00:29:49.196 [2024-05-15 01:58:13.062200] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78bf50 is same with the state(5) to be set 00:29:49.196 [2024-05-15 01:58:13.062212] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78bf50 is same with the state(5) to be set 00:29:49.196 [2024-05-15 01:58:13.062232] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78bf50 is same with the state(5) to be set 00:29:49.196 [2024-05-15 01:58:13.062261] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78bf50 is same with the state(5) to be set 00:29:49.196 [2024-05-15 01:58:13.062273] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78bf50 is same with the state(5) to be set 00:29:49.196 [2024-05-15 01:58:13.062286] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78bf50 is same with the state(5) to be set 00:29:49.196 [2024-05-15 01:58:13.062298] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78bf50 is same with the state(5) to be set 00:29:49.197 [2024-05-15 01:58:13.062310] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78bf50 is same with the state(5) to be set 00:29:49.197 [2024-05-15 01:58:13.062323] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78bf50 is same with the state(5) to be set 00:29:49.197 [2024-05-15 01:58:13.062335] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78bf50 is same with the state(5) to be set 00:29:49.197 [2024-05-15 01:58:13.062347] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78bf50 is same with the state(5) to be set 00:29:49.197 [2024-05-15 01:58:13.062359] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78bf50 is same with the state(5) to be set 00:29:49.197 [2024-05-15 01:58:13.062372] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78bf50 is same with the state(5) to be set 00:29:49.197 01:58:13 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:29:52.500 01:58:16 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:52.757 00:29:52.757 01:58:16 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:53.016 [2024-05-15 01:58:16.816630] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78ce00 is same with the state(5) to be set 00:29:53.016 [2024-05-15 01:58:16.816700] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78ce00 is same 
with the state(5) to be set 00:29:53.016 [2024-05-15 01:58:16.816718] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78ce00 is same with the state(5) to be set 00:29:53.016 [2024-05-15 01:58:16.816730] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78ce00 is same with the state(5) to be set 00:29:53.016 [2024-05-15 01:58:16.816742] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78ce00 is same with the state(5) to be set 00:29:53.016 [2024-05-15 01:58:16.816755] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78ce00 is same with the state(5) to be set 00:29:53.016 [2024-05-15 01:58:16.816767] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78ce00 is same with the state(5) to be set 00:29:53.016 [2024-05-15 01:58:16.816779] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78ce00 is same with the state(5) to be set 00:29:53.016 [2024-05-15 01:58:16.816791] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78ce00 is same with the state(5) to be set 00:29:53.016 [2024-05-15 01:58:16.816803] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78ce00 is same with the state(5) to be set 00:29:53.016 [2024-05-15 01:58:16.816815] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78ce00 is same with the state(5) to be set 00:29:53.016 [2024-05-15 01:58:16.816827] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78ce00 is same with the state(5) to be set 00:29:53.016 [2024-05-15 01:58:16.816839] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78ce00 is same with the state(5) to be set 00:29:53.016 [2024-05-15 01:58:16.816851] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78ce00 is same with the state(5) to be set 00:29:53.016 [2024-05-15 01:58:16.816863] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78ce00 is same with the state(5) to be set 00:29:53.016 [2024-05-15 01:58:16.816875] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78ce00 is same with the state(5) to be set 00:29:53.016 [2024-05-15 01:58:16.816901] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78ce00 is same with the state(5) to be set 00:29:53.016 [2024-05-15 01:58:16.816914] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78ce00 is same with the state(5) to be set 00:29:53.016 [2024-05-15 01:58:16.816926] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78ce00 is same with the state(5) to be set 00:29:53.016 01:58:16 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:29:56.293 01:58:19 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:56.293 [2024-05-15 01:58:20.100934] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:56.293 01:58:20 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:29:57.225 01:58:21 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:57.483 
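The transcript above is the entire failover exercise in miniature: host/failover.sh attaches the same subsystem to bdevperf over two TCP paths, tears down the active listener to force a path switch, cycles a third port in, and finally fails back to the original port, all while the timed I/O run is in flight. Each nvmf_subsystem_remove_listener triggers one of the tcp.c:1598 bursts, as the target drives the orphaned qpair into its error receive state. A condensed shell sketch of that flow, built only from commands visible in the log (the rpc/bperf shorthands, the backgrounding with & and $!, and the comments are illustrative glue, not part of the transcript):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  bperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py
  # two paths to one subsystem; only the first attach echoes a bdev name (NVMe0n1)
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $bperf -s /var/tmp/bdevperf.sock perform_tests &   # 15-second I/O run in the background
  run_test_pid=$!
  sleep 1
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # fail over to 4421
  sleep 3
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # fail over to 4422
  sleep 3
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420      # bring 4420 back
  sleep 1
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422   # fail back to 4420
  wait $run_test_pid   # printed 0 above, i.e. the run survived every switch

Each removed listener aborts whatever I/O is in flight on that path (the ABORTED - SQ DELETION dump in try.txt below) so it can be reissued on a surviving path; the test passes because the background run still completes cleanly.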
00:30:04.106 01:58:26 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 4170989
00:30:04.106 01:58:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # '[' -z 4170989 ']'
00:30:04.106 01:58:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # kill -0 4170989
00:30:04.106 01:58:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # uname
00:30:04.106 01:58:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:30:04.106 01:58:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4170989
00:30:04.106 01:58:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # process_name=reactor_0
00:30:04.106 01:58:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']'
00:30:04.106 01:58:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4170989'
00:30:04.106 killing process with pid 4170989
00:30:04.106 01:58:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # kill 4170989
00:30:04.106 01:58:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@971 -- # wait 4170989
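The try.txt dump that the test cats next is long because bdevperf logs every outstanding command and its completion individually. When digesting such a dump by hand, a couple of grep/awk one-liners are usually enough; a small sketch, assuming the raw try.txt layout (no CI elapsed-time prefix, so the opcode is field 6 of each command line; the path is the one cat'ed below):

  log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
  grep -c 'ABORTED - SQ DELETION' "$log"    # how many completions were aborts
  # split the aborted commands by opcode (READ vs WRITE)
  grep 'nvme_io_qpair_print_command' "$log" | awk '{print $6}' | sort | uniq -c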
00:30:04.106 01:58:27 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:30:04.106 [2024-05-15 01:58:10.687949] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization...
00:30:04.106 [2024-05-15 01:58:10.688024] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4170989 ]
00:30:04.106 EAL: No free 2048 kB hugepages reported on node 1
00:30:04.106 [2024-05-15 01:58:10.755453] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:04.106 [2024-05-15 01:58:10.836609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:30:04.106 Running I/O for 15 seconds...
00:30:04.106 [2024-05-15 01:58:13.063610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:04.106 [2024-05-15 01:58:13.063655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion pair repeated for every outstanding I/O on qid:1 -- READ lba:78992-79472 and WRITE lba:79480-79872, len:8 each, all ABORTED - SQ DELETION (00/08), timestamps 01:58:13.063683-01:58:13.067028 ...]
00:30:04.109 [2024-05-15 01:58:13.067061] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:04.109 [2024-05-15 01:58:13.067078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79880 len:8 PRP1 0x0 PRP2 0x0
00:30:04.109 [2024-05-15 01:58:13.067091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same abort_queued_reqs/manual-completion pattern repeated for queued WRITE lba:79888-79952, timestamps 01:58:13.067111-01:58:13.067548 ...]
00:30:04.110 [2024-05-15 01:58:13.067561] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:04.110 [2024-05-15 01:58:13.067572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:04.110 [2024-05-15 01:58:13.067583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79960 len:8 PRP1 0x0 PRP2 0x0
00:30:04.110 [2024-05-15 01:58:13.067596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:0 00:30:04.110 [2024-05-15 01:58:13.067608] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:04.110 [2024-05-15 01:58:13.067620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:04.110 [2024-05-15 01:58:13.067631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79968 len:8 PRP1 0x0 PRP2 0x0 00:30:04.110 [2024-05-15 01:58:13.067643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.110 [2024-05-15 01:58:13.067656] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:04.110 [2024-05-15 01:58:13.067667] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:04.110 [2024-05-15 01:58:13.067678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79976 len:8 PRP1 0x0 PRP2 0x0 00:30:04.110 [2024-05-15 01:58:13.067690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.110 [2024-05-15 01:58:13.067703] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:04.110 [2024-05-15 01:58:13.067714] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:04.110 [2024-05-15 01:58:13.067725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79984 len:8 PRP1 0x0 PRP2 0x0 00:30:04.110 [2024-05-15 01:58:13.067738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.110 [2024-05-15 01:58:13.067751] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:04.110 [2024-05-15 01:58:13.067762] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:04.110 [2024-05-15 01:58:13.067773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79992 len:8 PRP1 0x0 PRP2 0x0 00:30:04.110 [2024-05-15 01:58:13.067785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.110 [2024-05-15 01:58:13.067798] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:04.110 [2024-05-15 01:58:13.067809] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:04.110 [2024-05-15 01:58:13.067823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80000 len:8 PRP1 0x0 PRP2 0x0 00:30:04.110 [2024-05-15 01:58:13.067837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.110 [2024-05-15 01:58:13.067902] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200ddb0 was disconnected and freed. reset controller. 
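The "(00/08)" pair that spdk_nvme_print_completion prints above is the NVMe status code type and status code: type 0x0 is Generic Command Status and code 0x08 is Command Aborted due to SQ Deletion, the expected completion for I/O caught in flight when the initiator tears down its submission queues during a reset. A minimal sketch of testing for that status against SPDK's public completion struct (spdk/nvme_spec.h; the constant names are assumed from that header):

#include <stdbool.h>
#include "spdk/nvme_spec.h"

/* Return true when a completion carries the Generic/SQ-deletion abort
 * status that this log renders as "(00/08)". */
static bool
cpl_is_sq_deletion_abort(const struct spdk_nvme_cpl *cpl)
{
	return cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	       cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION;
}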
00:30:04.110 [2024-05-15 01:58:13.067928] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:30:04.110 [2024-05-15 01:58:13.067964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:04.110 [2024-05-15 01:58:13.067983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:04.110 [2024-05-15 01:58:13.067998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:30:04.110 [2024-05-15 01:58:13.068012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:04.110 [2024-05-15 01:58:13.068035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:30:04.110 [2024-05-15 01:58:13.068058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:04.110 [2024-05-15 01:58:13.068075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:30:04.110 [2024-05-15 01:58:13.068088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:04.110 [2024-05-15 01:58:13.068101] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:04.110 [2024-05-15 01:58:13.071405] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:04.110 [2024-05-15 01:58:13.071442] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fee600 (9): Bad file descriptor
00:30:04.110 [2024-05-15 01:58:13.234877] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
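The sequence above — fail the controller, abort the admin-queue async event requests, disconnect the TCP qpair (hence the "Bad file descriptor" flush error), and fail over from the 4420 trid to 4421 — is driven by the bdev_nvme layer during the test. From the public NVMe driver API, the same abort-then-reconnect behavior can be triggered with spdk_nvme_ctrlr_reset(); a hedged sketch under that assumption, not the code this test actually runs:

#include <stdio.h>
#include "spdk/nvme.h"

/* Reset a connected controller. Its qpairs are deleted and re-created,
 * so queued and in-flight I/O complete with ABORTED - SQ DELETION,
 * exactly as logged above. */
static int
reset_ctrlr(struct spdk_nvme_ctrlr *ctrlr)
{
	int rc = spdk_nvme_ctrlr_reset(ctrlr);

	if (rc != 0) {
		fprintf(stderr, "controller reset failed: %d\n", rc);
	}
	return rc;
}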
00:30:04.110 [... nvme_qpair.c NOTICE pairs repeat after the reconnect (243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion): in-flight READ commands sqid:1 nsid:1 lba:107752-108056 len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:30:04.111 [... NOTICE pairs repeat: in-flight WRITE commands sqid:1 nsid:1 lba:108080-108456 len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:30:04.113 [... nvme_qpair.c triplets repeat (579:nvme_qpair_abort_queued_reqs *ERROR*: aborting queued i/o / 558:nvme_qpair_manual_complete_request *NOTICE*: Command completed manually / 243:nvme_io_qpair_print_command): queued WRITE commands sqid:1 cid:0 nsid:1 lba:108464-108768 len:8, PRP1 0x0 PRP2 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
[2024-05-15 01:58:16.822264] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:04.114 [2024-05-15 01:58:16.822275] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:04.114 [2024-05-15 01:58:16.822287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108064 len:8 PRP1 0x0 PRP2 0x0 00:30:04.114 [2024-05-15 01:58:16.822299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.114 [2024-05-15 01:58:16.822312] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:04.114 [2024-05-15 01:58:16.822324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:04.114 [2024-05-15 01:58:16.822335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108072 len:8 PRP1 0x0 PRP2 0x0 00:30:04.114 [2024-05-15 01:58:16.822348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.114 [2024-05-15 01:58:16.822406] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200feb0 was disconnected and freed. reset controller. 00:30:04.114 [2024-05-15 01:58:16.822425] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:30:04.114 [2024-05-15 01:58:16.822459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:04.114 [2024-05-15 01:58:16.822476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.114 [2024-05-15 01:58:16.822491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:04.114 [2024-05-15 01:58:16.822505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.114 [2024-05-15 01:58:16.822523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:04.114 [2024-05-15 01:58:16.822536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.114 [2024-05-15 01:58:16.822550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:04.114 [2024-05-15 01:58:16.822563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.114 [2024-05-15 01:58:16.822576] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:04.114 [2024-05-15 01:58:16.822630] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fee600 (9): Bad file descriptor 00:30:04.114 [2024-05-15 01:58:16.825897] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:04.114 [2024-05-15 01:58:16.902712] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
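The burst above is the expected symptom of the failover under test: when the TCP qpair to 10.0.0.2:4421 is deleted, every I/O still queued on it is completed with NVMe status ABORTED - SQ DELETION (status code type 00, status code 08) before the bdev layer retries on 10.0.0.2:4422. A minimal sketch for tallying such a burst offline is below; it assumes only the record layout visible in this log, the file name "build.log" is a hypothetical placeholder for wherever this console output was saved, and the script is not part of the SPDK test suite.

import re
from collections import Counter

# Record shapes copied from the nvme_qpair.c output in this log.
CMD = re.compile(r"\*NOTICE\*: (READ|WRITE) sqid:\d+ cid:\d+ nsid:\d+ lba:(\d+) len:\d+")
ABORT = re.compile(r"ABORTED - SQ DELETION \(00/08\)")

def summarize(path="build.log"):  # placeholder path, not produced by the test itself
    ops = Counter()   # printed commands per opcode (READ/WRITE)
    aborts = 0        # completions carrying ABORTED - SQ DELETION
    lbas = []
    with open(path) as f:
        for line in f:
            m = CMD.search(line)
            if m:
                ops[m.group(1)] += 1
                lbas.append(int(m.group(2)))
            aborts += len(ABORT.findall(line))
    lba_span = (min(lbas), max(lbas)) if lbas else None
    return ops, aborts, lba_span

if __name__ == "__main__":
    ops, aborts, lba_span = summarize()
    print(f"commands: {dict(ops)}, aborted completions: {aborts}, lba span: {lba_span}")

Run against the saved console output, this reports roughly how many queued commands each SQ deletion swept away and over which LBA range, which is the same information the condensed placeholders above record by hand.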
(repeated records condensed: second abort burst, 01:58:21.371314 through 01:58:21.373815 — 243:nvme_io_qpair_print_command records for READ sqid:1 (various cids) nsid:1 lba:38984-39240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, then WRITE sqid:1 (various cids) nsid:1 lba:39264-39640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0)
(repeated records condensed: from 01:58:21.373850, 579:nvme_qpair_abort_queued_reqs (*ERROR*: aborting queued i/o) / 558:nvme_qpair_manual_complete_request / 243:nvme_io_qpair_print_command cycles for WRITE sqid:1 cid:0 nsid:1 lba:39648-39984 len:8 PRP1 0x0 PRP2 0x0, each completed ABORTED - SQ DELETION (00/08))
00:30:04.118 [2024-05-15 01:58:21.375965] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.118 [2024-05-15 01:58:21.375978] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:04.118 [2024-05-15 01:58:21.375989] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:04.118 [2024-05-15 01:58:21.376000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39992 len:8 PRP1 0x0 PRP2 0x0 00:30:04.118 [2024-05-15 01:58:21.376012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.118 [2024-05-15 01:58:21.376026] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:04.118 [2024-05-15 01:58:21.376037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:04.118 [2024-05-15 01:58:21.376048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40000 len:8 PRP1 0x0 PRP2 0x0 00:30:04.118 [2024-05-15 01:58:21.376060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.118 [2024-05-15 01:58:21.376073] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:04.118 [2024-05-15 01:58:21.376084] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:04.118 [2024-05-15 01:58:21.376095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39248 len:8 PRP1 0x0 PRP2 0x0 00:30:04.119 [2024-05-15 01:58:21.376108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.119 [2024-05-15 01:58:21.376120] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:04.119 [2024-05-15 01:58:21.376131] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:04.119 [2024-05-15 01:58:21.376148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39256 len:8 PRP1 0x0 PRP2 0x0 00:30:04.119 [2024-05-15 01:58:21.376161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.119 [2024-05-15 01:58:21.376233] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x201d3a0 was disconnected and freed. reset controller. 
00:30:04.119 [2024-05-15 01:58:21.376254] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:30:04.119 [2024-05-15 01:58:21.376288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:04.119 [2024-05-15 01:58:21.376306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.119 [2024-05-15 01:58:21.376321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:04.119 [2024-05-15 01:58:21.376334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.119 [2024-05-15 01:58:21.376352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:04.119 [2024-05-15 01:58:21.376365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.119 [2024-05-15 01:58:21.376379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:04.119 [2024-05-15 01:58:21.376392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.119 [2024-05-15 01:58:21.376405] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:04.119 [2024-05-15 01:58:21.376461] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fee600 (9): Bad file descriptor 00:30:04.119 [2024-05-15 01:58:21.379724] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:04.119 [2024-05-15 01:58:21.497284] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
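
The notices above are the host side of a path failover: the TCP connection to the active portal went down, every request still queued to that qpair was completed with ABORTED - SQ DELETION (00/08), and bdev_nvme reset the controller against the next registered address (here 10.0.0.2:4420). A minimal sketch of how this is driven, using only the rpc.py calls that appear verbatim later in this trace (socket path, address, ports, and NQN exactly as in the run; the loop form is illustrative, not the script's literal code):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# register 10.0.0.2:4420/4421/4422 as paths under the same controller name
for port in 4420 4421 4422; do
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
done
# detaching the active path aborts its queued I/O and triggers the
# "Start failover ... resetting controller" sequence logged above
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
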
00:30:04.119 
00:30:04.119 Latency(us)
00:30:04.119 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:04.119 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:04.119 Verification LBA range: start 0x0 length 0x4000
00:30:04.119 NVMe0n1 : 15.00 8321.89 32.51 902.58 0.00 13847.95 582.54 17282.09
00:30:04.119 ===================================================================================================================
00:30:04.119 Total : 8321.89 32.51 902.58 0.00 13847.95 582.54 17282.09
00:30:04.119 Received shutdown signal, test time was about 15.000000 seconds
00:30:04.119 
00:30:04.119 Latency(us)
00:30:04.119 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:04.119 ===================================================================================================================
00:30:04.119 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:04.119 01:58:27 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:30:04.119 01:58:27 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:30:04.119 01:58:27 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:30:04.119 01:58:27 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=4172956
00:30:04.119 01:58:27 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:30:04.119 01:58:27 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 4172956 /var/tmp/bdevperf.sock
00:30:04.119 01:58:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@828 -- # '[' -z 4172956 ']'
00:30:04.119 01:58:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:30:04.119 01:58:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local max_retries=100
00:30:04.119 01:58:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:30:04.119 01:58:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # xtrace_disable 00:30:04.119 01:58:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:04.119 01:58:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:30:04.119 01:58:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@861 -- # return 0 00:30:04.119 01:58:27 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:04.119 [2024-05-15 01:58:27.721286] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:04.119 01:58:27 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:04.119 [2024-05-15 01:58:27.965930] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:04.119 01:58:27 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:04.683 NVMe0n1 00:30:04.683 01:58:28 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:04.941 00:30:04.941 01:58:28 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:05.505 00:30:05.505 01:58:29 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:05.505 01:58:29 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:30:05.762 01:58:29 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:06.019 01:58:29 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:30:09.295 01:58:32 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:09.295 01:58:32 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:30:09.295 01:58:33 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=4173627 00:30:09.295 01:58:33 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:09.295 01:58:33 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 4173627 00:30:10.667 0 00:30:10.667 01:58:34 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:10.667 [2024-05-15 01:58:27.251911] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
00:30:10.667 [2024-05-15 01:58:27.251994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4172956 ] 00:30:10.667 EAL: No free 2048 kB hugepages reported on node 1 00:30:10.667 [2024-05-15 01:58:27.320474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:10.667 [2024-05-15 01:58:27.400317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:10.667 [2024-05-15 01:58:29.796465] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:10.667 [2024-05-15 01:58:29.796552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:10.667 [2024-05-15 01:58:29.796589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.667 [2024-05-15 01:58:29.796607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:10.667 [2024-05-15 01:58:29.796621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.667 [2024-05-15 01:58:29.796635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:10.667 [2024-05-15 01:58:29.796648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.667 [2024-05-15 01:58:29.796670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:10.667 [2024-05-15 01:58:29.796696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.667 [2024-05-15 01:58:29.796716] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:10.667 [2024-05-15 01:58:29.796757] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:10.667 [2024-05-15 01:58:29.796788] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0b600 (9): Bad file descriptor 00:30:10.667 [2024-05-15 01:58:29.889373] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:10.667 Running I/O for 1 seconds... 
00:30:10.667 
00:30:10.667 Latency(us)
00:30:10.667 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:10.667 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:10.667 Verification LBA range: start 0x0 length 0x4000
00:30:10.667 NVMe0n1 : 1.01 8388.76 32.77 0.00 0.00 15197.25 3470.98 12718.84
00:30:10.667 ===================================================================================================================
00:30:10.667 Total : 8388.76 32.77 0.00 0.00 15197.25 3470.98 12718.84
00:30:10.667 01:58:34 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
01:58:34 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
01:58:34 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:10.924 01:58:34 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:30:10.924 01:58:34 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:30:11.181 01:58:34 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:11.438 01:58:35 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:30:14.714 01:58:38 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
01:58:38 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:30:14.714 01:58:38 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 4172956
00:30:14.714 01:58:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # '[' -z 4172956 ']'
00:30:14.714 01:58:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # kill -0 4172956
00:30:14.714 01:58:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # uname
00:30:14.714 01:58:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:30:14.714 01:58:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4172956
00:30:14.714 01:58:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # process_name=reactor_0
00:30:14.714 01:58:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']'
00:30:14.714 01:58:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4172956'
killing process with pid 4172956
00:30:14.714 01:58:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # kill 4172956
00:30:14.972 01:58:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@971 -- # wait 4172956
00:30:14.972 01:58:38 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
01:58:38 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:15.230 01:58:38 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:30:15.230 
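
For reference, the pass/fail condition this test applied earlier (the count=3 check) reduces to a couple of shell lines; a sketch against the try.txt capture used by this run (the redirection of the bdevperf output into try.txt happens outside the excerpt shown here, and the expected count of 3 reads as one successful reset per failover in the 15-second run):

count=$(grep -c 'Resetting controller successful' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt)
# fail the test unless exactly three controller resets completed
(( count == 3 )) || exit 1
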
01:58:38 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:15.230 01:58:38 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:30:15.230 01:58:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:15.230 01:58:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:30:15.230 01:58:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:15.230 01:58:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:30:15.230 01:58:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:15.230 01:58:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:15.230 rmmod nvme_tcp 00:30:15.230 rmmod nvme_fabrics 00:30:15.230 rmmod nvme_keyring 00:30:15.230 01:58:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:15.230 01:58:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:30:15.230 01:58:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:30:15.230 01:58:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 4170695 ']' 00:30:15.230 01:58:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 4170695 00:30:15.230 01:58:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # '[' -z 4170695 ']' 00:30:15.230 01:58:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # kill -0 4170695 00:30:15.230 01:58:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # uname 00:30:15.230 01:58:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:30:15.230 01:58:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4170695 00:30:15.230 01:58:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:30:15.230 01:58:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:30:15.230 01:58:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4170695' 00:30:15.230 killing process with pid 4170695 00:30:15.230 01:58:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # kill 4170695 00:30:15.230 [2024-05-15 01:58:39.003165] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:30:15.230 01:58:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@971 -- # wait 4170695 00:30:15.489 01:58:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:15.489 01:58:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:15.489 01:58:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:15.489 01:58:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:15.489 01:58:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:15.489 01:58:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:15.489 01:58:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:15.489 01:58:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:17.386 01:58:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:17.386 00:30:17.386 real 0m35.379s 00:30:17.386 user 
2m4.176s 00:30:17.386 sys 0m5.808s 00:30:17.387 01:58:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # xtrace_disable 00:30:17.387 01:58:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:17.387 ************************************ 00:30:17.387 END TEST nvmf_failover 00:30:17.387 ************************************ 00:30:17.387 01:58:41 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:17.387 01:58:41 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:30:17.387 01:58:41 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:30:17.387 01:58:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:17.644 ************************************ 00:30:17.644 START TEST nvmf_host_discovery 00:30:17.644 ************************************ 00:30:17.644 01:58:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:17.644 * Looking for test storage... 00:30:17.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:17.644 01:58:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:17.644 01:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:30:17.644 01:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:17.644 01:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:17.644 01:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:17.644 01:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:17.644 01:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:17.644 01:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:17.644 01:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:17.644 01:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:17.644 01:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:17.644 01:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:17.644 01:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:17.644 01:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:17.644 01:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:17.644 01:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:17.644 01:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:17.644 01:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:17.644 01:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:17.644 01:58:41 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:17.644 01:58:41 nvmf_tcp.nvmf_host_discovery -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:17.644 01:58:41 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:17.644 01:58:41 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.644 01:58:41 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.644 01:58:41 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.644 01:58:41 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:30:17.644 01:58:41 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.644 01:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:30:17.644 01:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:17.644 01:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:17.644 01:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:17.644 01:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:17.644 01:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:17.644 01:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:17.644 01:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:17.644 01:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 
-- # have_pci_nics=0 00:30:17.644 01:58:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:30:17.644 01:58:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:30:17.644 01:58:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:30:17.644 01:58:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:30:17.645 01:58:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:30:17.645 01:58:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:30:17.645 01:58:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:30:17.645 01:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:17.645 01:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:17.645 01:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:17.645 01:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:17.645 01:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:17.645 01:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:17.645 01:58:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:17.645 01:58:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:17.645 01:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:17.645 01:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:17.645 01:58:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:30:17.645 01:58:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:20.169 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:20.169 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == 
e810 ]] 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:20.169 Found net devices under 0000:09:00.0: cvl_0_0 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:20.169 Found net devices under 0000:09:00.1: cvl_0_1 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:20.169 01:58:43 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:20.169 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:20.169 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:30:20.169 00:30:20.169 --- 10.0.0.2 ping statistics --- 00:30:20.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:20.169 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:20.169 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:20.169 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:30:20.169 00:30:20.169 --- 10.0.0.1 ping statistics --- 00:30:20.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:20.169 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@721 -- # xtrace_disable 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=4176632 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 4176632 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@828 -- # '[' -z 4176632 ']' 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local max_retries=100 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:20.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # xtrace_disable 00:30:20.169 01:58:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.169 [2024-05-15 01:58:44.003719] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:30:20.169 [2024-05-15 01:58:44.003810] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:20.169 EAL: No free 2048 kB hugepages reported on node 1 00:30:20.169 [2024-05-15 01:58:44.082476] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:20.428 [2024-05-15 01:58:44.168684] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:20.428 [2024-05-15 01:58:44.168747] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:20.428 [2024-05-15 01:58:44.168771] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:20.428 [2024-05-15 01:58:44.168784] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:20.428 [2024-05-15 01:58:44.168796] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:20.428 [2024-05-15 01:58:44.168827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:20.428 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:30:20.428 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@861 -- # return 0 00:30:20.428 01:58:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:20.428 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@727 -- # xtrace_disable 00:30:20.428 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.428 01:58:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:20.428 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:20.428 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:20.428 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.428 [2024-05-15 01:58:44.317128] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:20.428 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:20.428 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:30:20.428 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:20.428 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.428 [2024-05-15 01:58:44.325084] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:30:20.428 [2024-05-15 01:58:44.325437] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:20.428 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:20.428 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:30:20.428 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:20.428 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.428 null0 00:30:20.428 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:20.428 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:30:20.428 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:20.428 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.428 null1 00:30:20.428 01:58:44 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:20.428 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:30:20.428 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:20.428 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.428 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:20.428 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=4176651 00:30:20.428 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:30:20.428 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 4176651 /tmp/host.sock 00:30:20.428 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@828 -- # '[' -z 4176651 ']' 00:30:20.428 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local rpc_addr=/tmp/host.sock 00:30:20.428 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local max_retries=100 00:30:20.428 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:20.428 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:20.428 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # xtrace_disable 00:30:20.428 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.686 [2024-05-15 01:58:44.401376] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:30:20.686 [2024-05-15 01:58:44.401454] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4176651 ] 00:30:20.686 EAL: No free 2048 kB hugepages reported on node 1 00:30:20.686 [2024-05-15 01:58:44.477526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:20.686 [2024-05-15 01:58:44.559144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:20.945 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:30:20.945 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@861 -- # return 0 00:30:20.945 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:20.945 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:30:20.945 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:20.945 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.945 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:20.945 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:30:20.945 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:20.945 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.945 01:58:44 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:20.945 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:30:20.945 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:30:20.945 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:20.945 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:20.945 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.945 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:20.945 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:20.945 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:20.945 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:20.945 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:30:20.945 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:30:20.945 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:20.945 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:20.945 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:20.945 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.945 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:20.945 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:20.945 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:20.945 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:30:20.945 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:30:20.945 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:20.945 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.945 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:20.945 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:30:20.945 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:20.945 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:20.945 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.945 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:20.945 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:20.945 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:20.945 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:20.945 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:30:20.945 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:30:20.946 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:20.946 01:58:44 nvmf_tcp.nvmf_host_discovery -- 
00:30:20.946 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:30:20.946 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:30:20.946 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:30:20.946 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:30:20.946 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:30:20.946 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]]
00:30:20.946 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
00:30:20.946 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:30:20.946 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:30:20.946 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:30:20.946 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names
00:30:20.946 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:30:20.946 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:30:20.946 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:30:20.946 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:30:20.946 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:30:20.946 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:30:20.946 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:30:21.203 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]]
00:30:21.203 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list
00:30:21.203 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:30:21.203 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:30:21.203 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:30:21.203 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:30:21.203 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:30:21.203 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:30:21.203 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:30:21.203 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]]
00:30:21.203 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:30:21.203 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:30:21.203 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:30:21.203 [2024-05-15 01:58:44.942986] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:21.203 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:30:21.203 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names
00:30:21.204 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
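Target-side provisioning so far, condensed. The null0 namespace is assumed to be a null bdev created earlier in the script (that step is outside this excerpt); note these rpc_cmd calls carry no -s flag, so they go to the target's default RPC socket rather than the host app's:

# Condensed replay of the provisioning steps above; null0 is assumed to exist.
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0          # empty subsystem
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0    # first namespace
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420                                    # data-plane listener

The host still sees nothing at this point, because its NQN has not yet been allowed onto the subsystem; that is what the nvmf_subsystem_add_host call below changes.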
00:30:21.204 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:30:21.204 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:30:21.204 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:30:21.204 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:30:21.204 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:30:21.204 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:30:21.204 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]]
00:30:21.204 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list
00:30:21.204 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:30:21.204 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:30:21.204 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:30:21.204 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:30:21.204 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:30:21.204 01:58:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:30:21.204 01:58:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:30:21.204 01:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]]
00:30:21.204 01:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0
00:30:21.204 01:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:30:21.204 01:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:30:21.204 01:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:30:21.204 01:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10
00:30:21.204 01:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- ))
00:30:21.204 01:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:30:21.204 01:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count
00:30:21.204 01:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:30:21.204 01:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
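waitforcondition and the notification counters are the synchronization backbone of this test. Their shape can be reconstructed from the xtrace (@911 through @917 and discovery.sh@74/@75); the real helpers live in autotest_common.sh and discovery.sh and may differ in detail:

# Reconstructed shape of the retry loop and counter helper seen in the xtrace.
waitforcondition() {
    # Re-evaluate an arbitrary bash condition up to 10 times, 1s apart.
    local cond=$1
    local max=10
    while ((max--)); do
        eval "$cond" && return 0
        sleep 1
    done
    return 1
}
get_notification_count() {
    # Count bdev notifications at or after $notify_id, then advance the cursor.
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count))
}

That cursor arithmetic is visible in this log: notify_id steps from 0 to 1, then 2, and finally 4 as namespaces come and go.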
00:30:21.204 01:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:30:21.204 01:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:30:21.204 01:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:30:21.204 01:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:30:21.204 01:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0
00:30:21.204 01:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count ))
00:30:21.204 01:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0
00:30:21.204 01:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
00:30:21.204 01:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:30:21.204 01:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:30:21.204 01:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:30:21.204 01:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:30:21.204 01:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:30:21.204 01:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10
00:30:21.204 01:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- ))
00:30:21.204 01:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:30:21.204 01:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names
00:30:21.204 01:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:30:21.204 01:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:30:21.204 01:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:30:21.204 01:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:30:21.204 01:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:30:21.204 01:58:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:30:21.204 01:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:30:21.204 01:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ '' == \n\v\m\e\0 ]]
00:30:21.204 01:58:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # sleep 1
00:30:22.134 [2024-05-15 01:58:45.726397] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:30:22.134 [2024-05-15 01:58:45.726428] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:30:22.134 [2024-05-15 01:58:45.726451] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:30:22.134 [2024-05-15 01:58:45.812746] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:30:22.134 [2024-05-15 01:58:46.037262] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*:
Discovery[10.0.0.2:8009] attach nvme0 done 00:30:22.134 [2024-05-15 01:58:46.037288] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:30:22.392 01:58:46 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_paths nvme0 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ 4420 == \4\4\2\0 ]] 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:22.392 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:22.393 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:22.393 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:22.393 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:22.393 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:30:22.393 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:30:22.393 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:30:22.393 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:30:22.393 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:30:22.393 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:22.393 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:22.393 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:22.393 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:22.393 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:22.393 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:30:22.393 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:30:22.393 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:22.393 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:30:22.393 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:22.393 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:22.393 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:22.393 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:22.393 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:22.393 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:22.393 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@913 -- # (( max-- )) 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:22.651 [2024-05-15 01:58:46.387571] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:22.651 [2024-05-15 01:58:46.388507] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:22.651 [2024-05-15 01:58:46.388562] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:22.651 [2024-05-15 01:58:46.475177] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_paths nvme0 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@63 -- # xargs 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:30:22.651 01:58:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # sleep 1 00:30:22.652 [2024-05-15 01:58:46.577865] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:22.652 [2024-05-15 01:58:46.577900] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:22.652 [2024-05-15 01:58:46.577910] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:24.057 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:30:24.057 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:24.057 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_paths nvme0 00:30:24.057 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:24.057 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:24.057 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:24.057 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:24.057 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:24.057 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:24.057 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:24.057 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:30:24.057 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:30:24.057 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:30:24.057 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:24.057 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:24.057 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:24.057 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:30:24.057 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:30:24.057 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:24.057 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:30:24.057 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:24.057 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:24.057 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:24.057 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:24.057 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:24.057 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:24.057 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:24.057 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:30:24.057 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:30:24.057 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:24.057 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:24.057 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:24.057 [2024-05-15 01:58:47.612247] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:24.057 [2024-05-15 01:58:47.612308] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:24.057 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:24.057 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:24.057 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:24.057 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:30:24.057 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:30:24.057 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:24.057 [2024-05-15 01:58:47.617942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:24.057 [2024-05-15 01:58:47.617981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.057 [2024-05-15 01:58:47.617999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:24.057 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:30:24.057 [2024-05-15 01:58:47.618015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.057 [2024-05-15 01:58:47.618040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:24.057 [2024-05-15 01:58:47.618055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.057 [2024-05-15 01:58:47.618070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:24.057 [2024-05-15 01:58:47.618085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:24.057 [2024-05-15 01:58:47.618108] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c0c60 is same with the state(5) to be set 00:30:24.057 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:24.057 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:24.057 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:24.057 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:24.057 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:24.057 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:24.057 [2024-05-15 01:58:47.627944] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c0c60 (9): Bad file descriptor 00:30:24.057 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:24.057 [2024-05-15 01:58:47.637993] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:24.057 [2024-05-15 01:58:47.638227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.057 [2024-05-15 01:58:47.638369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.057 [2024-05-15 01:58:47.638396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23c0c60 with addr=10.0.0.2, port=4420 00:30:24.057 [2024-05-15 01:58:47.638413] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c0c60 is same with the state(5) to be set 00:30:24.058 [2024-05-15 01:58:47.638436] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c0c60 (9): Bad file descriptor 00:30:24.058 [2024-05-15 01:58:47.638471] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:24.058 [2024-05-15 01:58:47.638489] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:24.058 [2024-05-15 01:58:47.638506] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:24.058 [2024-05-15 01:58:47.638537] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:24.058 [2024-05-15 01:58:47.648072] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:24.058 [2024-05-15 01:58:47.648255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.058 [2024-05-15 01:58:47.648415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.058 [2024-05-15 01:58:47.648442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23c0c60 with addr=10.0.0.2, port=4420 00:30:24.058 [2024-05-15 01:58:47.648458] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c0c60 is same with the state(5) to be set 00:30:24.058 [2024-05-15 01:58:47.648481] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c0c60 (9): Bad file descriptor 00:30:24.058 [2024-05-15 01:58:47.648514] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:24.058 [2024-05-15 01:58:47.648533] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:24.058 [2024-05-15 01:58:47.648547] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:24.058 [2024-05-15 01:58:47.648566] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:24.058 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:24.058 [2024-05-15 01:58:47.658148] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:24.058 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:30:24.058 [2024-05-15 01:58:47.658336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.058 [2024-05-15 01:58:47.658445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.058 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:24.058 [2024-05-15 01:58:47.658472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23c0c60 with addr=10.0.0.2, port=4420 00:30:24.058 [2024-05-15 01:58:47.658495] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c0c60 is same with the state(5) to be set 00:30:24.058 [2024-05-15 01:58:47.658519] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c0c60 (9): Bad file descriptor 00:30:24.058 [2024-05-15 01:58:47.658551] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:24.058 [2024-05-15 01:58:47.658565] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:24.058 [2024-05-15 01:58:47.658579] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:24.058 [2024-05-15 01:58:47.658644] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:24.058 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:24.058 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:30:24.058 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:30:24.058 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:24.058 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:30:24.058 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:24.058 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:24.058 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:24.058 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:24.058 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:24.058 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:24.058 [2024-05-15 01:58:47.668238] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:24.058 [2024-05-15 01:58:47.668448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.058 [2024-05-15 01:58:47.668603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.058 [2024-05-15 01:58:47.668629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23c0c60 with addr=10.0.0.2, port=4420 00:30:24.058 [2024-05-15 01:58:47.668646] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c0c60 is same with the state(5) to be set 00:30:24.058 [2024-05-15 01:58:47.668668] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c0c60 (9): Bad file descriptor 00:30:24.058 [2024-05-15 01:58:47.668689] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:24.058 [2024-05-15 01:58:47.668703] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:24.058 [2024-05-15 01:58:47.668717] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:24.058 [2024-05-15 01:58:47.668737] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:24.058 [2024-05-15 01:58:47.678339] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:24.058 [2024-05-15 01:58:47.678529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.058 [2024-05-15 01:58:47.678697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.058 [2024-05-15 01:58:47.678726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23c0c60 with addr=10.0.0.2, port=4420 00:30:24.058 [2024-05-15 01:58:47.678744] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c0c60 is same with the state(5) to be set 00:30:24.058 [2024-05-15 01:58:47.678769] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c0c60 (9): Bad file descriptor 00:30:24.058 [2024-05-15 01:58:47.678798] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:24.058 [2024-05-15 01:58:47.678814] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:24.058 [2024-05-15 01:58:47.678829] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:24.058 [2024-05-15 01:58:47.678865] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:24.058 [2024-05-15 01:58:47.688410] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:24.058 [2024-05-15 01:58:47.688581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.058 [2024-05-15 01:58:47.688730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.058 [2024-05-15 01:58:47.688756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23c0c60 with addr=10.0.0.2, port=4420 00:30:24.058 [2024-05-15 01:58:47.688772] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c0c60 is same with the state(5) to be set 00:30:24.058 [2024-05-15 01:58:47.688794] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c0c60 (9): Bad file descriptor 00:30:24.058 [2024-05-15 01:58:47.688815] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:24.058 [2024-05-15 01:58:47.688830] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:24.058 [2024-05-15 01:58:47.688843] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:24.058 [2024-05-15 01:58:47.688861] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
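The burst of connect() failed, errno = 111 records above is the expected fallout of this phase rather than a test failure: nvmf_subsystem_remove_listener tore down the 4420 path while the host still had a controller on it, so bdev_nvme keeps resetting and re-dialing the now-closed port until the discovery poller reports the path gone and prunes it (the "4420 not found / 4421 found again" pair just below). The assertion this phase drives toward, condensed, with get_subsystem_paths reconstructed from the xtrace at host/discovery.sh@63; $NVMF_SECOND_PORT is assumed to be 4421, set earlier by the harness:

# Reconstructed helper plus a condensed replay of the listener-removal check.
get_subsystem_paths() {
    # TCP service IDs (ports) of every active path to controller $1
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
        jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}
rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'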
00:30:24.058 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:24.058 [2024-05-15 01:58:47.698482] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:24.058 [2024-05-15 01:58:47.698637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.058 [2024-05-15 01:58:47.698772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.058 [2024-05-15 01:58:47.698801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23c0c60 with addr=10.0.0.2, port=4420 00:30:24.058 [2024-05-15 01:58:47.698819] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c0c60 is same with the state(5) to be set 00:30:24.058 [2024-05-15 01:58:47.698843] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c0c60 (9): Bad file descriptor 00:30:24.058 [2024-05-15 01:58:47.698867] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:24.058 [2024-05-15 01:58:47.698883] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:24.058 [2024-05-15 01:58:47.698899] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:24.058 [2024-05-15 01:58:47.698920] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:24.058 [2024-05-15 01:58:47.698970] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:30:24.058 [2024-05-15 01:58:47.699002] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:24.058 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:24.058 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:30:24.058 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:24.058 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:24.058 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:30:24.058 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:30:24.058 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:30:24.058 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_paths nvme0 00:30:24.058 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:24.058 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:24.058 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:24.058 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:24.058 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:24.058 01:58:47 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@63 -- # xargs 00:30:24.058 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:24.058 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ 4421 == \4\4\2\1 ]] 00:30:24.058 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:30:24.058 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:30:24.058 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length'
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count ))
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]'
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]'
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- ))
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]'
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ '' == '' ]]
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]'
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]'
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- ))
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]'
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
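Teardown check, condensed: bdev_nvme_stop_discovery -b nvme detaches the discovery controller and every data controller it created, so both helper lists must drain to empty strings. A sketch of the sequence this part of the log is verifying:

# Condensed replay of the stop-discovery teardown being checked above.
rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
waitforcondition '[[ "$(get_subsystem_names)" == "" ]]'   # controller nvme0 gone
waitforcondition '[[ "$(get_bdev_list)" == "" ]]'         # nvme0n1/nvme0n2 gone
is_notification_count_eq 2   # presumably the two bdev-removal events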
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ '' == '' ]]
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- ))
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count ))
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:30:24.059 01:58:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:30:25.449 [2024-05-15 01:58:48.984006] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:30:25.449 [2024-05-15 01:58:48.984036] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:30:25.449 [2024-05-15 01:58:48.984059] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:30:25.449 [2024-05-15 01:58:49.111480] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0
00:30:25.449 [2024-05-15 01:58:49.177640] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:30:25.449 [2024-05-15 01:58:49.177685] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:30:25.449 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:30:25.449 01:58:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:30:25.449 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0
00:30:25.449 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:30:25.449 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd
00:30:25.449 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:30:25.449 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd
00:30:25.449 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:30:25.449 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:30:25.449 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:30:25.449 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:30:25.449 request:
00:30:25.449 {
00:30:25.449 "name": "nvme",
00:30:25.449 "trtype": "tcp",
00:30:25.449 "traddr": "10.0.0.2",
00:30:25.449 "hostnqn": "nqn.2021-12.io.spdk:test",
00:30:25.449 "adrfam": "ipv4",
00:30:25.449 "trsvcid": "8009",
00:30:25.449 "wait_for_attach": true,
00:30:25.449 "method": "bdev_nvme_start_discovery",
00:30:25.449 "req_id": 1
00:30:25.449 }
00:30:25.449 Got JSON-RPC error response
00:30:25.449 response:
00:30:25.449 {
00:30:25.449 "code": -17,
00:30:25.449 "message": "File exists"
00:30:25.449 }
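The failure above is deliberate: the name "nvme" is already taken by the discovery service restarted at discovery.sh@141, so a second bdev_nvme_start_discovery with the same -b name is rejected with JSON-RPC error -17 ("File exists"), and the NOT wrapper inverts the exit status so the expected failure counts as a pass. A rough shape of that wrapper, under the assumption that the real one in autotest_common.sh also vets the command through valid_exec_arg, as the xtrace shows:

# Rough shape of the NOT helper: succeed only if the wrapped command fails.
NOT() {
    if "$@"; then
        return 1   # unexpected success
    fi
    return 0       # non-zero exit is the expected outcome
}
NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
    -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

The -b nvme_second attempt that follows is likewise wrapped in NOT; this log section is cut off before its response.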
00:30:25.449 "adrfam": "ipv4", 00:30:25.449 "trsvcid": "8009", 00:30:25.449 "wait_for_attach": true, 00:30:25.449 "method": "bdev_nvme_start_discovery", 00:30:25.449 "req_id": 1 00:30:25.449 } 00:30:25.449 Got JSON-RPC error response 00:30:25.449 response: 00:30:25.449 { 00:30:25.449 "code": -17, 00:30:25.449 "message": "File exists" 00:30:25.449 } 00:30:25.449 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:30:25.449 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:30:25.449 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:30:25.449 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:30:25.449 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:30:25.449 01:58:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:30:25.449 01:58:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:25.449 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:25.449 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:25.449 01:58:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:25.449 01:58:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:25.449 01:58:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:25.449 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:25.449 01:58:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:30:25.449 01:58:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:30:25.449 01:58:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:25.449 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:25.449 01:58:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:25.449 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:25.449 01:58:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:25.449 01:58:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:25.449 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:25.449 01:58:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:25.449 01:58:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:25.449 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:30:25.450 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:25.450 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:30:25.450 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:30:25.450 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- 
# type -t rpc_cmd 00:30:25.450 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:30:25.450 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:25.450 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:25.450 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:25.450 request: 00:30:25.450 { 00:30:25.450 "name": "nvme_second", 00:30:25.450 "trtype": "tcp", 00:30:25.450 "traddr": "10.0.0.2", 00:30:25.450 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:25.450 "adrfam": "ipv4", 00:30:25.450 "trsvcid": "8009", 00:30:25.450 "wait_for_attach": true, 00:30:25.450 "method": "bdev_nvme_start_discovery", 00:30:25.450 "req_id": 1 00:30:25.450 } 00:30:25.450 Got JSON-RPC error response 00:30:25.450 response: 00:30:25.450 { 00:30:25.450 "code": -17, 00:30:25.450 "message": "File exists" 00:30:25.450 } 00:30:25.450 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:30:25.450 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:30:25.450 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:30:25.450 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:30:25.450 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:30:25.450 01:58:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:30:25.450 01:58:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:25.450 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:25.450 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:25.450 01:58:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:25.450 01:58:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:25.450 01:58:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:25.450 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:25.450 01:58:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:30:25.450 01:58:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:30:25.450 01:58:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:25.450 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:25.450 01:58:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:25.450 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:25.450 01:58:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:25.450 01:58:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:25.450 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:25.450 01:58:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:25.450 01:58:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:25.450 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:30:25.450 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:25.450 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:30:25.450 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:30:25.450 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:30:25.450 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:30:25.450 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:25.450 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:25.450 01:58:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:26.821 [2024-05-15 01:58:50.373115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.821 [2024-05-15 01:58:50.373294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.821 [2024-05-15 01:58:50.373322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23be820 with addr=10.0.0.2, port=8010 00:30:26.821 [2024-05-15 01:58:50.373353] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:26.821 [2024-05-15 01:58:50.373368] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:26.821 [2024-05-15 01:58:50.373382] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:27.752 [2024-05-15 01:58:51.375460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.752 [2024-05-15 01:58:51.375647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.752 [2024-05-15 01:58:51.375677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23be820 with addr=10.0.0.2, port=8010 00:30:27.752 [2024-05-15 01:58:51.375698] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:27.752 [2024-05-15 01:58:51.375712] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:27.752 [2024-05-15 01:58:51.375726] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:28.685 [2024-05-15 01:58:52.377728] bdev_nvme.c:7010:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:30:28.685 request: 00:30:28.685 { 00:30:28.685 "name": "nvme_second", 00:30:28.685 "trtype": "tcp", 00:30:28.685 "traddr": "10.0.0.2", 00:30:28.685 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:28.685 "adrfam": "ipv4", 00:30:28.685 "trsvcid": "8010", 00:30:28.685 "attach_timeout_ms": 3000, 00:30:28.685 "method": "bdev_nvme_start_discovery", 00:30:28.685 "req_id": 1 00:30:28.685 } 00:30:28.685 Got JSON-RPC error response 00:30:28.685 response: 00:30:28.685 { 00:30:28.685 "code": -110, 00:30:28.685 "message": "Connection timed out" 
00:30:28.685 } 00:30:28.685 01:58:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:30:28.685 01:58:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:30:28.685 01:58:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:30:28.685 01:58:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:30:28.685 01:58:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:30:28.685 01:58:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:30:28.685 01:58:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:28.685 01:58:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:28.685 01:58:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:28.685 01:58:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:28.685 01:58:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:28.685 01:58:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:28.685 01:58:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:28.685 01:58:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:30:28.685 01:58:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:30:28.685 01:58:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 4176651 00:30:28.685 01:58:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:30:28.685 01:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:28.685 01:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:30:28.685 01:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:28.685 01:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:30:28.685 01:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:28.685 01:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:28.685 rmmod nvme_tcp 00:30:28.685 rmmod nvme_fabrics 00:30:28.685 rmmod nvme_keyring 00:30:28.685 01:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:28.685 01:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:30:28.685 01:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:30:28.685 01:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 4176632 ']' 00:30:28.685 01:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 4176632 00:30:28.685 01:58:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@947 -- # '[' -z 4176632 ']' 00:30:28.685 01:58:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # kill -0 4176632 00:30:28.685 01:58:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # uname 00:30:28.685 01:58:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:30:28.685 01:58:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4176632 00:30:28.685 01:58:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:30:28.685 01:58:52 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:30:28.685 01:58:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4176632' 00:30:28.685 killing process with pid 4176632 00:30:28.685 01:58:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # kill 4176632 00:30:28.685 [2024-05-15 01:58:52.516571] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:30:28.685 01:58:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@971 -- # wait 4176632 00:30:28.943 01:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:28.943 01:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:28.943 01:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:28.943 01:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:28.943 01:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:28.943 01:58:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:28.943 01:58:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:28.943 01:58:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:31.470 01:58:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:31.470 00:30:31.470 real 0m13.470s 00:30:31.470 user 0m18.901s 00:30:31.470 sys 0m3.059s 00:30:31.470 01:58:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # xtrace_disable 00:30:31.470 01:58:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:31.470 ************************************ 00:30:31.470 END TEST nvmf_host_discovery 00:30:31.470 ************************************ 00:30:31.470 01:58:54 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:31.470 01:58:54 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:30:31.470 01:58:54 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:30:31.470 01:58:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:31.470 ************************************ 00:30:31.470 START TEST nvmf_host_multipath_status 00:30:31.470 ************************************ 00:30:31.470 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:31.470 * Looking for test storage... 
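The killprocess 4176632 teardown traced above follows a guard-then-kill pattern from autotest_common.sh: confirm the pid is still alive, make sure it is not the sudo wrapper, then kill and reap it. A minimal sketch of that pattern, reconstructed from the xtrace alone (the body below is an approximation, not the verbatim helper):

killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1                    # mirrors the '[' -z 4176632 ']' guard
    kill -0 "$pid" || return 1                   # process must still be alive
    if [[ $(uname) == Linux ]]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [[ $process_name != sudo ]] || return 1  # never reap the sudo wrapper itself
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                  # traced above as a separate wait step
}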
00:30:31.470 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:31.470 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:31.470 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:30:31.470 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:31.470 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:31.470 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:31.470 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:31.470 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:31.470 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:31.470 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:31.470 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:31.470 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:31.470 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:31.470 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:31.470 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:31.470 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:31.470 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:31.470 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:31.470 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:31.470 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:31.470 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:31.470 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:31.470 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:31.471 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.471 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.471 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.471 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:30:31.471 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.471 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:30:31.471 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:31.471 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:31.471 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:31.471 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:31.471 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:31.471 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:31.471 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:31.471 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:31.471 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:31.471 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:31.471 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:31.471 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:30:31.471 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:31.471 01:58:54 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:30:31.471 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:30:31.471 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:31.471 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:31.471 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:31.471 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:31.471 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:31.471 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:31.471 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:31.471 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:31.471 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:31.471 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:31.471 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:30:31.471 01:58:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:33.370 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:33.370 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:30:33.370 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:33.370 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:33.370 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:33.370 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:33.370 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:33.370 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:30:33.370 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:33.370 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:30:33.370 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:30:33.370 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:30:33.370 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:30:33.370 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:30:33.370 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:30:33.370 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:33.370 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:33.370 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:33.370 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:33.370 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:33.370 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:33.370 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:33.370 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:33.370 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:33.370 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:33.370 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:33.370 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:33.370 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:33.370 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:33.370 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:33.370 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:33.370 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:33.370 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:33.370 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:33.370 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:33.370 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:33.370 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:33.370 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:33.370 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:33.371 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
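The gather_supported_nvmf_pci_devs trace above buckets NICs into the e810/x722/mlx arrays by PCI vendor:device ID and then reports lines such as 'Found 0000:09:00.0 (0x8086 - 0x159b)'. A condensed sketch of that bucketing; the pci_bus_cache lookup used by the real helper is replaced here with an illustrative lspci parse, so treat the pipeline as an assumption:

intel=0x8086
declare -a e810 net_devs

# Collect PCI addresses of Intel E810 functions (device id 0x159b).
while read -r addr class vendor device _; do
    if [[ 0x$vendor == "$intel" && 0x$device == 0x159b ]]; then
        e810+=("$addr")
        echo "Found $addr ($intel - 0x$device)"
    fi
done < <(lspci -Dnmm | tr -d '"')

# Walk sysfs for the net device(s) under each function, mirroring
# pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) in the trace.
for pci in "${e810[@]}"; do
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $path ]] && net_devs+=("${path##*/}")
    done
done
printf 'Found net device: %s\n' "${net_devs[@]}"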
00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:33.371 Found net devices under 0000:09:00.0: cvl_0_0 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:33.371 Found net devices under 0000:09:00.1: cvl_0_1 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:33.371 01:58:57 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:33.371 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:33.629 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:33.629 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:33.629 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:33.629 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:30:33.629 00:30:33.629 --- 10.0.0.2 ping statistics --- 00:30:33.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:33.629 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:30:33.629 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:33.629 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:33.629 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:30:33.629 00:30:33.629 --- 10.0.0.1 ping statistics --- 00:30:33.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:33.629 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:30:33.629 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:33.629 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:30:33.629 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:33.629 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:33.629 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:33.629 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:33.629 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:33.629 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:33.629 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:33.629 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:30:33.629 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:33.629 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@721 -- # xtrace_disable 00:30:33.629 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:33.629 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=4179984 00:30:33.629 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:30:33.629 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 4179984 00:30:33.629 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@828 -- # '[' -z 4179984 ']' 00:30:33.629 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:33.629 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local max_retries=100 00:30:33.629 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:33.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:33.630 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # xtrace_disable 00:30:33.630 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:33.630 [2024-05-15 01:58:57.383788] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
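The nvmf_tcp_init sequence traced above isolates the target port in a network namespace and proves reachability in both directions before the target is started. Condensed from the commands in the trace, with interface names and addresses verbatim from the log:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns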
00:30:33.630 [2024-05-15 01:58:57.383858] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:33.630 EAL: No free 2048 kB hugepages reported on node 1 00:30:33.630 [2024-05-15 01:58:57.455829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:33.630 [2024-05-15 01:58:57.534724] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:33.630 [2024-05-15 01:58:57.534776] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:33.630 [2024-05-15 01:58:57.534801] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:33.630 [2024-05-15 01:58:57.534812] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:33.630 [2024-05-15 01:58:57.534822] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:33.630 [2024-05-15 01:58:57.534885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:33.630 [2024-05-15 01:58:57.534890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:33.887 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:30:33.887 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@861 -- # return 0 00:30:33.887 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:33.887 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@727 -- # xtrace_disable 00:30:33.887 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:33.887 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:33.887 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=4179984 00:30:33.887 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:34.145 [2024-05-15 01:58:57.882021] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:34.145 01:58:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:34.402 Malloc0 00:30:34.402 01:58:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:30:34.660 01:58:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:34.918 01:58:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:35.175 [2024-05-15 01:58:58.914937] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be 
removed in v24.09 00:30:35.175 [2024-05-15 01:58:58.915212] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:35.175 01:58:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:35.432 [2024-05-15 01:58:59.155863] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:35.433 01:58:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=4180262 00:30:35.433 01:58:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:35.433 01:58:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 4180262 /var/tmp/bdevperf.sock 00:30:35.433 01:58:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@828 -- # '[' -z 4180262 ']' 00:30:35.433 01:58:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:35.433 01:58:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:30:35.433 01:58:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local max_retries=100 00:30:35.433 01:58:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:35.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
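Before bdevperf attaches below, the target side above was assembled with a handful of rpc.py calls: create the TCP transport, back it with a malloc bdev, expose a subsystem with ANA reporting (-r) and two listeners, and add the namespace. Condensed, with every argument taken from the trace (the full /var/jenkins/... path to rpc.py is shortened to $rpc):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421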
00:30:35.433 01:58:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # xtrace_disable 00:30:35.433 01:58:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:35.690 01:58:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:30:35.690 01:58:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@861 -- # return 0 00:30:35.690 01:58:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:30:35.947 01:58:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:30:36.512 Nvme0n1 00:30:36.512 01:59:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:30:37.076 Nvme0n1 00:30:37.077 01:59:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:30:37.077 01:59:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:30:38.976 01:59:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:30:38.976 01:59:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:30:39.234 01:59:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:39.492 01:59:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:30:40.864 01:59:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:30:40.864 01:59:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:40.864 01:59:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:40.864 01:59:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:40.864 01:59:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:40.864 01:59:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:40.864 01:59:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:40.864 01:59:04 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:41.122 01:59:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:41.122 01:59:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:41.122 01:59:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:41.122 01:59:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:41.379 01:59:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:41.379 01:59:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:41.379 01:59:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:41.379 01:59:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:41.637 01:59:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:41.637 01:59:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:41.637 01:59:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:41.637 01:59:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:41.894 01:59:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:41.894 01:59:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:41.894 01:59:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:41.894 01:59:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:42.152 01:59:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:42.152 01:59:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:30:42.152 01:59:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:42.409 01:59:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:42.667 01:59:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:30:43.599 01:59:07 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:30:43.599 01:59:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:43.599 01:59:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:43.599 01:59:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:43.857 01:59:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:43.857 01:59:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:43.857 01:59:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:43.857 01:59:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:44.149 01:59:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:44.149 01:59:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:44.149 01:59:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:44.149 01:59:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:44.414 01:59:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:44.414 01:59:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:44.414 01:59:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:44.414 01:59:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:44.672 01:59:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:44.672 01:59:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:44.672 01:59:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:44.672 01:59:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:44.930 01:59:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:44.930 01:59:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:44.930 01:59:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:44.930 01:59:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:45.188 01:59:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:45.188 01:59:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:30:45.188 01:59:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:45.445 01:59:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:30:45.702 01:59:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:30:46.634 01:59:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:30:46.634 01:59:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:46.634 01:59:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:46.634 01:59:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:46.892 01:59:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:46.892 01:59:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:46.892 01:59:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:46.892 01:59:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:47.150 01:59:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:47.150 01:59:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:47.150 01:59:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:47.150 01:59:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:47.407 01:59:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:47.407 01:59:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:47.407 01:59:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:47.407 01:59:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:47.665 01:59:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:47.665 01:59:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:47.665 01:59:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:47.665 01:59:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:47.923 01:59:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:47.923 01:59:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:47.923 01:59:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:47.923 01:59:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:48.180 01:59:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:48.180 01:59:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:30:48.180 01:59:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:48.436 01:59:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:48.694 01:59:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:30:49.633 01:59:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:30:49.633 01:59:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:49.633 01:59:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:49.633 01:59:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:49.892 01:59:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:49.892 01:59:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:49.892 01:59:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:49.892 01:59:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:50.149 01:59:13 
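For readers following the trace: every port_status probe above is the same two-step check, one RPC against the bdevperf application followed by a jq projection, and check_status simply fans that out across both listeners and the three per-path fields. A minimal reconstruction of the helpers at multipath_status.sh@64-@73, assuming only rpc.py and jq (the function bodies are inferred from the trace, not the script verbatim):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # port_status <trsvcid> <field> <expected>: read one field of one I/O path
    # from bdevperf's point of view and require the expected value.
    port_status() {
        local port=$1 field=$2 expected=$3 got
        got=$("$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
        [[ "$got" == "$expected" ]]
    }

    # check_status <4420 current> <4421 current> <4420 connected> <4421 connected>
    #              <4420 accessible> <4421 accessible>
    check_status() {
        port_status 4420 current "$1" && port_status 4421 current "$2" &&
        port_status 4420 connected "$3" && port_status 4421 connected "$4" &&
        port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
    }

The harness runs with errexit, so the first field that disagrees with its expected value aborts the test; that is why each probe shows up in the trace as its own rpc.py + jq pair.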
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:50.149 01:59:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:50.149 01:59:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:50.149 01:59:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:50.407 01:59:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:50.407 01:59:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:50.407 01:59:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:50.407 01:59:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:50.664 01:59:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:50.664 01:59:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:50.664 01:59:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:50.664 01:59:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:50.921 01:59:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:50.921 01:59:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:50.921 01:59:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:50.921 01:59:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:51.180 01:59:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:51.180 01:59:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:30:51.180 01:59:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:30:51.438 01:59:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:51.695 01:59:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:30:52.625 01:59:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:30:52.625 01:59:16 
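The state changes that drive each round are the two-argument helper at multipath_status.sh@59-@60: one listener-level RPC per port, issued against the target's default RPC socket (no -s flag, unlike the bdevperf queries). A sketch of the equivalent, with the NQN and address exactly as they appear in the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # set_ANA_state <state for 4420> <state for 4421>
    # States exercised by this test: optimized, non_optimized, inaccessible.
    set_ANA_state() {
        "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    set_ANA_state inaccessible inaccessible   # the @108 transition verified next
    sleep 1   # as in the script: give the initiator time to process the ANA change

The round that follows expects false false true true false false: with both listeners inaccessible the paths stay connected, but neither is accessible, so neither can be current.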
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:52.625 01:59:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:52.625 01:59:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:52.882 01:59:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:52.882 01:59:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:52.882 01:59:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:52.882 01:59:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:53.139 01:59:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:53.139 01:59:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:53.139 01:59:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:53.139 01:59:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:53.396 01:59:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:53.396 01:59:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:53.396 01:59:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:53.396 01:59:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:53.653 01:59:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:53.653 01:59:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:30:53.653 01:59:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:53.653 01:59:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:53.909 01:59:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:53.909 01:59:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:53.909 01:59:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:53.909 01:59:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:54.166 01:59:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:54.166 01:59:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:30:54.166 01:59:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:30:54.423 01:59:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:54.679 01:59:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:30:55.611 01:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:30:55.611 01:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:55.611 01:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:55.611 01:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:55.868 01:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:55.868 01:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:55.868 01:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:55.868 01:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:56.125 01:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:56.125 01:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:56.125 01:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:56.125 01:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:56.383 01:59:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:56.383 01:59:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:56.383 01:59:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:56.383 01:59:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:56.640 01:59:20 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:56.640 01:59:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:30:56.640 01:59:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:56.640 01:59:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:56.898 01:59:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:56.898 01:59:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:56.898 01:59:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:56.898 01:59:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:57.155 01:59:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:57.155 01:59:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:30:57.419 01:59:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:30:57.419 01:59:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:30:57.678 01:59:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:57.935 01:59:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:30:58.869 01:59:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:30:58.869 01:59:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:58.869 01:59:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:58.869 01:59:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:59.155 01:59:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:59.155 01:59:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:59.155 01:59:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:59.155 01:59:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").current' 00:30:59.413 01:59:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:59.413 01:59:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:59.413 01:59:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:59.413 01:59:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:59.683 01:59:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:59.683 01:59:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:59.683 01:59:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:59.683 01:59:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:59.945 01:59:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:59.945 01:59:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:59.945 01:59:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:59.945 01:59:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:00.202 01:59:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:00.202 01:59:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:00.202 01:59:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:00.202 01:59:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:00.460 01:59:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:00.460 01:59:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:31:00.460 01:59:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:00.717 01:59:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:00.975 01:59:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:31:01.907 01:59:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true 
true true true true 00:31:01.907 01:59:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:01.907 01:59:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:01.907 01:59:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:02.164 01:59:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:02.164 01:59:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:02.164 01:59:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:02.164 01:59:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:02.420 01:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:02.420 01:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:02.420 01:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:02.420 01:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:02.677 01:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:02.677 01:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:02.677 01:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:02.677 01:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:02.934 01:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:02.934 01:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:02.934 01:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:02.934 01:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:03.190 01:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:03.190 01:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:03.190 01:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:03.190 01:59:26 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:03.446 01:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:03.446 01:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:31:03.446 01:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:03.703 01:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:03.960 01:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:31:04.894 01:59:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:31:04.894 01:59:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:04.894 01:59:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:04.894 01:59:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:05.151 01:59:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:05.151 01:59:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:05.151 01:59:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:05.151 01:59:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:05.409 01:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:05.409 01:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:05.409 01:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:05.409 01:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:05.667 01:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:05.667 01:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:05.667 01:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:05.667 01:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:05.925 01:59:29 
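The rounds on either side of the bdev_nvme_set_multipath_policy call at @116 are worth comparing. Before it, the bdev runs the default active_passive policy and exactly one path reports current no matter how many are usable; after it, active_active lets every path in the best available ANA group carry I/O. The true/false columns below are read straight off the check_status expectations in the trace (the policy interpretation is mine):

    # ANA(4420) / ANA(4421)           current(4420) / current(4421)   policy
    # non_optimized / non_optimized   true  / false                   active_passive (@102)
    # optimized     / optimized       true  / true                    active_active  (@121)
    # non_optimized / optimized       false / true                    active_active  (@125)
    # non_optimized / non_optimized   true  / true                    active_active  (@131)

    # The switch itself, as issued against the bdevperf socket:
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active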
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:05.925 01:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:05.925 01:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:05.925 01:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:06.183 01:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:06.183 01:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:06.183 01:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:06.183 01:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:06.440 01:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:06.440 01:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:31:06.440 01:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:06.698 01:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:06.955 01:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:31:07.889 01:59:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:31:07.889 01:59:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:07.889 01:59:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:07.889 01:59:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:08.146 01:59:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:08.146 01:59:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:08.146 01:59:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:08.146 01:59:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:08.404 01:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:08.404 01:59:32 
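A note on how the comparisons render in this trace: port_status does a plain string test against a quoted variable, and bash xtrace prints a quoted right-hand side of == inside [[ ]] by escaping every character so the literal match cannot be mistaken for a glob. That is all the backslash runs are:

    expected=false
    set -x
    [[ "false" == "$expected" ]]   # traces as: [[ false == \f\a\l\s\e ]]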
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:08.404 01:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:08.404 01:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:08.661 01:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:08.661 01:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:08.661 01:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:08.661 01:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:08.919 01:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:08.919 01:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:08.919 01:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:08.919 01:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:09.176 01:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:09.176 01:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:09.176 01:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:09.176 01:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:09.434 01:59:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:09.434 01:59:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 4180262 00:31:09.434 01:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@947 -- # '[' -z 4180262 ']' 00:31:09.434 01:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # kill -0 4180262 00:31:09.434 01:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # uname 00:31:09.434 01:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:31:09.434 01:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4180262 00:31:09.434 01:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:31:09.434 01:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:31:09.434 01:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # echo 'killing process with pid 
4180262' 00:31:09.434 killing process with pid 4180262 00:31:09.434 01:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # kill 4180262 00:31:09.434 01:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # wait 4180262 00:31:09.710 Connection closed with partial response: 00:31:09.710 00:31:09.710 00:31:09.710 01:59:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 4180262 00:31:09.710 01:59:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:09.710 [2024-05-15 01:58:59.217620] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:31:09.710 [2024-05-15 01:58:59.217711] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4180262 ] 00:31:09.710 EAL: No free 2048 kB hugepages reported on node 1 00:31:09.710 [2024-05-15 01:58:59.285846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:09.710 [2024-05-15 01:58:59.369620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:09.710 Running I/O for 90 seconds... 00:31:09.710 [2024-05-15 01:59:15.184805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:57448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.710 [2024-05-15 01:59:15.184863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:09.710 [2024-05-15 01:59:15.184899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:57456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.710 [2024-05-15 01:59:15.184916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:09.710 [2024-05-15 01:59:15.184948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:57072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.710 [2024-05-15 01:59:15.184964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:09.710 [2024-05-15 01:59:15.184985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:57080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.711 [2024-05-15 01:59:15.185001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:09.711 [2024-05-15 01:59:15.185022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:57088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.711 [2024-05-15 01:59:15.185038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:09.711 [2024-05-15 01:59:15.185059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:57096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.711 [2024-05-15 01:59:15.185075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:09.711 [2024-05-15 01:59:15.185096] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:57104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.711 [2024-05-15 01:59:15.185111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:09.711 [2024-05-15 01:59:15.185132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:57112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.711 [2024-05-15 01:59:15.185148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:09.711 [2024-05-15 01:59:15.185169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:57120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.711 [2024-05-15 01:59:15.185184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:09.711 [2024-05-15 01:59:15.185229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:57128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.711 [2024-05-15 01:59:15.185247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:09.711 [2024-05-15 01:59:15.185270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:57136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.711 [2024-05-15 01:59:15.185296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:09.711 [2024-05-15 01:59:15.185318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:57144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.711 [2024-05-15 01:59:15.185334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:09.711 [2024-05-15 01:59:15.185356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:57152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.711 [2024-05-15 01:59:15.185371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:09.711 [2024-05-15 01:59:15.185393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:57160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.711 [2024-05-15 01:59:15.185408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:09.711 [2024-05-15 01:59:15.185430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:57168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.711 [2024-05-15 01:59:15.185446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:09.711 [2024-05-15 01:59:15.185468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:57176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.711 [2024-05-15 01:59:15.185483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 
sqhd:0073 p:0 m:0 dnr:0 00:31:09.711 [2024-05-15 01:59:15.185524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:57184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.711 [2024-05-15 01:59:15.185539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:09.711 [2024-05-15 01:59:15.185576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:57192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.711 [2024-05-15 01:59:15.185592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:09.711 [2024-05-15 01:59:15.185613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:57200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.711 [2024-05-15 01:59:15.185628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:09.711 [2024-05-15 01:59:15.185649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:57208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.711 [2024-05-15 01:59:15.185664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:09.711 [2024-05-15 01:59:15.185685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:57216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.711 [2024-05-15 01:59:15.185700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:09.711 [2024-05-15 01:59:15.185722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:57224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.711 [2024-05-15 01:59:15.185737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:09.711 [2024-05-15 01:59:15.185758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:57232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.711 [2024-05-15 01:59:15.185773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:09.711 [2024-05-15 01:59:15.185800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:57240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.711 [2024-05-15 01:59:15.185816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:09.711 [2024-05-15 01:59:15.185837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:57248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.711 [2024-05-15 01:59:15.185853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:09.711 [2024-05-15 01:59:15.185874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:57256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.711 [2024-05-15 01:59:15.185889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:09.711 [2024-05-15 01:59:15.185926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:57264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.711 [2024-05-15 01:59:15.185942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:09.711 [2024-05-15 01:59:15.185963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:57272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.711 [2024-05-15 01:59:15.185978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:09.711 [2024-05-15 01:59:15.185998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:57280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.711 [2024-05-15 01:59:15.186014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.711 [2024-05-15 01:59:15.186034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:57288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.711 [2024-05-15 01:59:15.186049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:09.711 [2024-05-15 01:59:15.186070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:57296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.711 [2024-05-15 01:59:15.186085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:09.711 [2024-05-15 01:59:15.186105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:57304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.711 [2024-05-15 01:59:15.186120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:09.711 [2024-05-15 01:59:15.186140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:57312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.711 [2024-05-15 01:59:15.186155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:09.711 [2024-05-15 01:59:15.186177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:57320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.711 [2024-05-15 01:59:15.186206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:09.711 [2024-05-15 01:59:15.186238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:57328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.711 [2024-05-15 01:59:15.186265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:09.711 [2024-05-15 01:59:15.186291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:57336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.711 [2024-05-15 
01:59:15.186308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:09.711 [2024-05-15 01:59:15.186329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:57344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.711 [2024-05-15 01:59:15.186345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:09.711 [2024-05-15 01:59:15.186366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:57352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.711 [2024-05-15 01:59:15.186381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:09.711 [2024-05-15 01:59:15.186403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:57360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.711 [2024-05-15 01:59:15.186419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:09.711 [2024-05-15 01:59:15.186440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:57368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.711 [2024-05-15 01:59:15.186455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:09.711 [2024-05-15 01:59:15.186476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:57376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.711 [2024-05-15 01:59:15.186506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:09.711 [2024-05-15 01:59:15.186532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:57384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.711 [2024-05-15 01:59:15.186547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:09.711 [2024-05-15 01:59:15.186568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:57392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.711 [2024-05-15 01:59:15.186583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:09.711 [2024-05-15 01:59:15.186603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:57400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.711 [2024-05-15 01:59:15.186618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:09.711 [2024-05-15 01:59:15.186638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:57408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.711 [2024-05-15 01:59:15.186653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:09.711 [2024-05-15 01:59:15.186673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:57416 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.711 [2024-05-15 01:59:15.186688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:09.712 [2024-05-15 01:59:15.186711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:57424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.712 [2024-05-15 01:59:15.186735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:09.712 [2024-05-15 01:59:15.186777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:57432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.712 [2024-05-15 01:59:15.186804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:09.712 [2024-05-15 01:59:15.186837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:57440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.712 [2024-05-15 01:59:15.186862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:09.712 [2024-05-15 01:59:15.186896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:57464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.712 [2024-05-15 01:59:15.186921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:09.712 [2024-05-15 01:59:15.186955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:57472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.712 [2024-05-15 01:59:15.186981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:09.712 [2024-05-15 01:59:15.187017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:57480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.712 [2024-05-15 01:59:15.187040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:09.712 [2024-05-15 01:59:15.187068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:57488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.712 [2024-05-15 01:59:15.187089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:09.712 [2024-05-15 01:59:15.187117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:57496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.712 [2024-05-15 01:59:15.187139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:09.712 [2024-05-15 01:59:15.187165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:57504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.712 [2024-05-15 01:59:15.187185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:09.712 [2024-05-15 01:59:15.187239] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:57512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.712 [2024-05-15 01:59:15.187274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:09.712 [2024-05-15 01:59:15.187302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:57520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.712 [2024-05-15 01:59:15.187324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:09.712 [2024-05-15 01:59:15.187354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:57528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.712 [2024-05-15 01:59:15.187376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:09.712 [2024-05-15 01:59:15.187404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:57536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.712 [2024-05-15 01:59:15.187426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:09.712 [2024-05-15 01:59:15.187455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:57544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.712 [2024-05-15 01:59:15.187483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:09.712 [2024-05-15 01:59:15.187513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:57552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.712 [2024-05-15 01:59:15.187556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:09.712 [2024-05-15 01:59:15.187583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:57560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.712 [2024-05-15 01:59:15.187604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:09.712 [2024-05-15 01:59:15.187631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:57568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.712 [2024-05-15 01:59:15.187652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:09.712 [2024-05-15 01:59:15.187679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:57576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.712 [2024-05-15 01:59:15.187700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:09.712 [2024-05-15 01:59:15.187729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:57584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.712 [2024-05-15 01:59:15.187750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 
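From here to the end of the dump every completion carries the same status, ASYMMETRIC ACCESS INACCESSIBLE, printed as (03/02): Status Code Type 3h (Path Related Status), Status Code 02h. These are bdevperf I/Os that were in flight against a listener just flipped to inaccessible; the multipath layer requeues them on the surviving path, which is the failover behavior the check_status rounds above confirmed from the control plane. When scanning a dump like this by hand, a quick tally can replace eyeballing (file path as cat'ed at @141):

    # Count completions that failed with the ANA-inaccessible path status:
    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt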
00:31:09.712 [2024-05-15 01:59:15.188684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:57592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.712 [2024-05-15 01:59:15.188731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:09.712 [2024-05-15 01:59:15.188775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:57600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.712 [2024-05-15 01:59:15.188804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:09.712 [2024-05-15 01:59:15.188841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:57608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.712 [2024-05-15 01:59:15.188871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:09.712 [2024-05-15 01:59:15.188908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:57616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.712 [2024-05-15 01:59:15.188935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:09.712 [2024-05-15 01:59:15.188967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:57624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.712 [2024-05-15 01:59:15.188991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:09.712 [2024-05-15 01:59:15.189023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:57632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.712 [2024-05-15 01:59:15.189061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:09.712 [2024-05-15 01:59:15.189090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:57640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.712 [2024-05-15 01:59:15.189130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:09.712 [2024-05-15 01:59:15.189160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:57648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.712 [2024-05-15 01:59:15.189182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:09.712 [2024-05-15 01:59:15.189236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:57656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.712 [2024-05-15 01:59:15.189270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:09.712 [2024-05-15 01:59:15.189299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:57664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.712 [2024-05-15 01:59:15.189322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:59 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:09.712 [2024-05-15 01:59:15.189352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:57672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.712 [2024-05-15 01:59:15.189375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:09.712 [2024-05-15 01:59:15.189406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:57680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.712 [2024-05-15 01:59:15.189429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:09.712 [2024-05-15 01:59:15.189459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:57688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.712 [2024-05-15 01:59:15.189482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:09.712 [2024-05-15 01:59:15.189523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:57696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.712 [2024-05-15 01:59:15.189561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:09.712 [2024-05-15 01:59:15.189591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:57704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.712 [2024-05-15 01:59:15.189613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:09.712 [2024-05-15 01:59:15.189643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:57712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.712 [2024-05-15 01:59:15.189665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:09.712 [2024-05-15 01:59:15.189694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:57720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.712 [2024-05-15 01:59:15.189717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:09.712 [2024-05-15 01:59:15.189746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:57728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.712 [2024-05-15 01:59:15.189769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:09.712 [2024-05-15 01:59:15.189798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:57736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.712 [2024-05-15 01:59:15.189821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:09.712 [2024-05-15 01:59:15.189855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.712 [2024-05-15 01:59:15.189878] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:09.712 [2024-05-15 01:59:15.189908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:57752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.712 [2024-05-15 01:59:15.189931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:09.713 [2024-05-15 01:59:15.189959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:57760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.713 [2024-05-15 01:59:15.189981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:09.713 [2024-05-15 01:59:15.190010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:57768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.713 [2024-05-15 01:59:15.190033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:09.713 [2024-05-15 01:59:15.190062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:57776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.713 [2024-05-15 01:59:15.190098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:09.713 [2024-05-15 01:59:15.190129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:57784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.713 [2024-05-15 01:59:15.190152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:09.713 [2024-05-15 01:59:15.190184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:57792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.713 [2024-05-15 01:59:15.190232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:09.713 [2024-05-15 01:59:15.190294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:57800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.713 [2024-05-15 01:59:15.190319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:09.713 [2024-05-15 01:59:15.190353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:57808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.713 [2024-05-15 01:59:15.190379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:09.713 [2024-05-15 01:59:15.190413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:57816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.713 [2024-05-15 01:59:15.190438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:09.713 [2024-05-15 01:59:15.190472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:57824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:09.713 [2024-05-15 01:59:15.190496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:09.713 [2024-05-15 01:59:15.190540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:57832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.713 [2024-05-15 01:59:15.190580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:09.713 [2024-05-15 01:59:15.190610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:57840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.713 [2024-05-15 01:59:15.190638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:09.713 [2024-05-15 01:59:15.190670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:57848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.713 [2024-05-15 01:59:15.190707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:09.713 [2024-05-15 01:59:15.190736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:57856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.713 [2024-05-15 01:59:15.190760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:09.713 [2024-05-15 01:59:15.190790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:57864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.713 [2024-05-15 01:59:15.190813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:09.713 [2024-05-15 01:59:15.190844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:57872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.713 [2024-05-15 01:59:15.190865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:09.713 [2024-05-15 01:59:15.190895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:57880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.713 [2024-05-15 01:59:15.190917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:09.713 [2024-05-15 01:59:15.190946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:57888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.713 [2024-05-15 01:59:15.190968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:09.713 [2024-05-15 01:59:15.191013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:57896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.713 [2024-05-15 01:59:15.191037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:09.713 [2024-05-15 01:59:15.191069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 
lba:57904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.713 [2024-05-15 01:59:15.191092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:09.713 [2024-05-15 01:59:15.191123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:57912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.713 [2024-05-15 01:59:15.191147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:09.713 [2024-05-15 01:59:15.191177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:57920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.713 [2024-05-15 01:59:15.191226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:09.713 [2024-05-15 01:59:15.191285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:57928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.713 [2024-05-15 01:59:15.191309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:09.713 [2024-05-15 01:59:15.191341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:57936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.713 [2024-05-15 01:59:15.191374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:09.713 [2024-05-15 01:59:15.191407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:57944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.713 [2024-05-15 01:59:15.191431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:09.713 [2024-05-15 01:59:15.191463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:57952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.713 [2024-05-15 01:59:15.191501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:09.713 [2024-05-15 01:59:15.191543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:57960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.713 [2024-05-15 01:59:15.191567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:09.713 [2024-05-15 01:59:15.192203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:57968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.713 [2024-05-15 01:59:15.192247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:09.713 [2024-05-15 01:59:15.192295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:57976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.713 [2024-05-15 01:59:15.192322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:09.713 [2024-05-15 01:59:15.192355] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:57984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.713 [2024-05-15 01:59:15.192381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:09.713 [2024-05-15 01:59:15.192416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:57992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.713 [2024-05-15 01:59:15.192441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:09.713 [2024-05-15 01:59:15.192488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:58000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.713 [2024-05-15 01:59:15.192513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:09.713 [2024-05-15 01:59:15.192570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:58008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.713 [2024-05-15 01:59:15.192607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:09.713 [2024-05-15 01:59:15.192637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:58016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.713 [2024-05-15 01:59:15.192661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:09.713 [2024-05-15 01:59:15.192691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:58024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.713 [2024-05-15 01:59:15.192714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:09.713 [2024-05-15 01:59:15.192745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:58032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.713 [2024-05-15 01:59:15.192768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:09.713 [2024-05-15 01:59:15.192803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:58040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.713 [2024-05-15 01:59:15.192827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:09.713 [2024-05-15 01:59:15.192857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:58048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.713 [2024-05-15 01:59:15.192880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:09.713 [2024-05-15 01:59:15.192910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:58056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.713 [2024-05-15 01:59:15.192935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005f p:0 m:0 dnr:0 
00:31:09.713 [2024-05-15 01:59:15.192970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:58064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.713 [2024-05-15 01:59:15.192996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:09.713 [2024-05-15 01:59:15.193032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:58072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.713 [2024-05-15 01:59:15.193058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:09.714 [2024-05-15 01:59:15.193093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:58080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.714 [2024-05-15 01:59:15.193120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:09.714 [2024-05-15 01:59:15.193155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:58088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.714 [2024-05-15 01:59:15.193180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:09.714 [2024-05-15 01:59:15.193239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:57448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.714 [2024-05-15 01:59:15.193269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:09.714 [2024-05-15 01:59:15.193306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:57456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.714 [2024-05-15 01:59:15.193334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:09.714 [2024-05-15 01:59:15.193371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:57072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.714 [2024-05-15 01:59:15.193397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:09.714 [2024-05-15 01:59:15.193434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:57080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.714 [2024-05-15 01:59:15.193460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:09.714 [2024-05-15 01:59:15.193511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:57088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.714 [2024-05-15 01:59:15.193559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:09.714 [2024-05-15 01:59:15.193602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:57096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.714 [2024-05-15 01:59:15.193629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:09.714 [2024-05-15 01:59:15.193663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:57104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.714 [2024-05-15 01:59:15.193689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:09.714 [2024-05-15 01:59:15.193724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:57112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.714 [2024-05-15 01:59:15.193750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:09.714 [2024-05-15 01:59:15.193783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:57120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.714 [2024-05-15 01:59:15.193809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:09.714 [2024-05-15 01:59:15.193843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:57128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.714 [2024-05-15 01:59:15.193883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:09.714 [2024-05-15 01:59:15.193918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:57136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.714 [2024-05-15 01:59:15.193944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:09.714 [2024-05-15 01:59:15.193977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:57144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.714 [2024-05-15 01:59:15.194003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:09.714 [2024-05-15 01:59:15.194035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:57152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.714 [2024-05-15 01:59:15.194060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:09.714 [2024-05-15 01:59:15.194094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:57160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.714 [2024-05-15 01:59:15.194118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:09.714 [2024-05-15 01:59:15.194154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:57168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.714 [2024-05-15 01:59:15.194180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:09.714 [2024-05-15 01:59:15.194238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:57176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.714 [2024-05-15 01:59:15.194272] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:09.714 [2024-05-15 01:59:15.194308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:57184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.714 [2024-05-15 01:59:15.194334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:09.714 [2024-05-15 01:59:15.194377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:57192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.714 [2024-05-15 01:59:15.194404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:09.714 [2024-05-15 01:59:15.194439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:57200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.714 [2024-05-15 01:59:15.194465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:09.714 [2024-05-15 01:59:15.194515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:57208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.714 [2024-05-15 01:59:15.194543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:09.714 [2024-05-15 01:59:15.194579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:57216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.714 [2024-05-15 01:59:15.194604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:09.714 [2024-05-15 01:59:15.194639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:57224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.714 [2024-05-15 01:59:15.194662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:09.714 [2024-05-15 01:59:15.194693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:57232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.714 [2024-05-15 01:59:15.194717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:09.714 [2024-05-15 01:59:15.194750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:57240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.714 [2024-05-15 01:59:15.194775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:09.714 [2024-05-15 01:59:15.194806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:57248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.714 [2024-05-15 01:59:15.194830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:09.714 [2024-05-15 01:59:15.194862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:57256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:31:09.714 [2024-05-15 01:59:15.194887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:09.714 [2024-05-15 01:59:15.194917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:57264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.714 [2024-05-15 01:59:15.194942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:09.714 [2024-05-15 01:59:15.194975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:57272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.714 [2024-05-15 01:59:15.195000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:09.714 [2024-05-15 01:59:15.195037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:57280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.714 [2024-05-15 01:59:15.195065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.714 [2024-05-15 01:59:15.195100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:57288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.714 [2024-05-15 01:59:15.195133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:09.714 [2024-05-15 01:59:15.195169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:57296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.714 [2024-05-15 01:59:15.195210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:09.714 [2024-05-15 01:59:15.195267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:57304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.714 [2024-05-15 01:59:15.195294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:09.714 [2024-05-15 01:59:15.195332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:57312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.714 [2024-05-15 01:59:15.195360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:09.714 [2024-05-15 01:59:15.195398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:57320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.714 [2024-05-15 01:59:15.195426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:09.714 [2024-05-15 01:59:15.195463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:57328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.714 [2024-05-15 01:59:15.195491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:09.714 [2024-05-15 01:59:15.195554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:88 nsid:1 lba:57336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.714 [2024-05-15 01:59:15.195581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:09.714 [2024-05-15 01:59:15.195619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:57344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.714 [2024-05-15 01:59:15.195646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:09.714 [2024-05-15 01:59:15.195683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:57352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.715 [2024-05-15 01:59:15.195710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:09.715 [2024-05-15 01:59:15.195747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:57360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.715 [2024-05-15 01:59:15.195773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:09.715 [2024-05-15 01:59:15.195808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:57368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.715 [2024-05-15 01:59:15.195834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:09.715 [2024-05-15 01:59:15.195867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:57376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.715 [2024-05-15 01:59:15.195892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:09.715 [2024-05-15 01:59:15.195926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:57384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.715 [2024-05-15 01:59:15.195957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:09.715 [2024-05-15 01:59:15.195993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:57392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.715 [2024-05-15 01:59:15.196018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:09.715 [2024-05-15 01:59:15.196052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:57400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.715 [2024-05-15 01:59:15.196077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:09.715 [2024-05-15 01:59:15.196111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:57408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.715 [2024-05-15 01:59:15.196139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:09.715 [2024-05-15 01:59:15.196173] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:57416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.715 [2024-05-15 01:59:15.196213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:09.715 [2024-05-15 01:59:15.196275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:57424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.715 [2024-05-15 01:59:15.196302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:09.715 [2024-05-15 01:59:15.196351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:57432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.715 [2024-05-15 01:59:15.196379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:09.715 [2024-05-15 01:59:15.196417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:57440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.715 [2024-05-15 01:59:15.196444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:09.715 [2024-05-15 01:59:15.196480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:57464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.715 [2024-05-15 01:59:15.196507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:09.715 [2024-05-15 01:59:15.196568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:57472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.715 [2024-05-15 01:59:15.196594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:09.715 [2024-05-15 01:59:15.196631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:57480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.715 [2024-05-15 01:59:15.196674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:09.715 [2024-05-15 01:59:15.196711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:57488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.715 [2024-05-15 01:59:15.196738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:09.715 [2024-05-15 01:59:15.196773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:57496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.715 [2024-05-15 01:59:15.196801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:09.715 [2024-05-15 01:59:15.196841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:57504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.715 [2024-05-15 01:59:15.196868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001a p:0 m:0 dnr:0 
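For reference: in the "(03/02)" suffix printed by spdk_nvme_print_completion, 03 is the NVMe status code type (path-related) and 02 the status code, i.e. the namespace's ANA (Asymmetric Namespace Access) state is being reported as inaccessible; dnr:0 means the host is allowed to retry, which is why the same LBAs reappear above under new cids. Below is a minimal, hypothetical helper for condensing bursts like this when reading autotest logs; it is not part of the test run and only assumes the line format visible above (the script name and its output format are illustrative):

#!/usr/bin/env python3
# summarize_qpair_notices.py (hypothetical) - tally SPDK nvme_qpair notices
# from an autotest log. Usage: python3 summarize_qpair_notices.py < build.log
import re
import sys
from collections import Counter

# Matches command prints such as:
#   nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:57464 len:8 ...
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
    r"sqid:(\d+) cid:(\d+) nsid:\d+ lba:(\d+)")
# Matches completion prints such as:
#   nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 ...
CPL_RE = re.compile(
    r"spdk_nvme_print_completion: \*NOTICE\*: (.+?) "
    r"\((\w\w)/(\w\w)\) qid:(\d+) cid:(\d+)")

def summarize(stream):
    cmds, cpls = Counter(), Counter()
    lbas = []
    for line in stream:
        # findall copes with several records jammed onto one physical line,
        # as happens in this log.
        for op, sqid, _cid, lba in CMD_RE.findall(line):
            cmds[(op, sqid)] += 1
            lbas.append(int(lba))
        for status, sct, sc, qid, _cid in CPL_RE.findall(line):
            cpls[(status, sct, sc, qid)] += 1
    for (op, sqid), n in sorted(cmds.items()):
        print(f"{n:6d} {op} commands on sqid {sqid}")
    for (status, sct, sc, qid), n in sorted(cpls.items()):
        print(f"{n:6d} completions '{status}' (sct {sct}/sc {sc}) on qid {qid}")
    if lbas:
        print(f"lba range: {min(lbas)}-{max(lbas)}")

if __name__ == "__main__":
    summarize(sys.stdin)
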
[... the burst continues: the resubmitted WRITEs (nsid:1, lba 57512-58056, len:8) keep completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1 through 01:59:15.203 ...]
00:31:09.717 [2024-05-15 01:59:15.203039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:58056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:09.717 [2024-05-15 01:59:15.203063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105
cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:09.717 [2024-05-15 01:59:15.203095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:58064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.717 [2024-05-15 01:59:15.203119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:09.717 [2024-05-15 01:59:15.203152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:58072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.717 [2024-05-15 01:59:15.203176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:09.717 [2024-05-15 01:59:15.203235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:58080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.717 [2024-05-15 01:59:15.203262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:09.717 [2024-05-15 01:59:15.203295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:58088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.717 [2024-05-15 01:59:15.203324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:09.717 [2024-05-15 01:59:15.203359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:57448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.717 [2024-05-15 01:59:15.203386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:09.717 [2024-05-15 01:59:15.203420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:57456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.717 [2024-05-15 01:59:15.203446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:09.717 [2024-05-15 01:59:15.203480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:57072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.717 [2024-05-15 01:59:15.203508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:09.717 [2024-05-15 01:59:15.203557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:57080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.717 [2024-05-15 01:59:15.203586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:09.717 [2024-05-15 01:59:15.203625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:57088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.717 [2024-05-15 01:59:15.203654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:09.717 [2024-05-15 01:59:15.203693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:57096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.717 [2024-05-15 01:59:15.203722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:09.717 [2024-05-15 01:59:15.203762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:57104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.717 [2024-05-15 01:59:15.203792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:09.717 [2024-05-15 01:59:15.203845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:57112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.717 [2024-05-15 01:59:15.203874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:09.717 [2024-05-15 01:59:15.203925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:57120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.717 [2024-05-15 01:59:15.203952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:09.717 [2024-05-15 01:59:15.203989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:57128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.717 [2024-05-15 01:59:15.204015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:09.717 [2024-05-15 01:59:15.204051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:57136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.717 [2024-05-15 01:59:15.204078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:09.717 [2024-05-15 01:59:15.204114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:57144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.717 [2024-05-15 01:59:15.204146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:09.717 [2024-05-15 01:59:15.204182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:57152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.717 [2024-05-15 01:59:15.204233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:09.717 [2024-05-15 01:59:15.204272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:57160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.717 [2024-05-15 01:59:15.204300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:09.717 [2024-05-15 01:59:15.204337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:57168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.717 [2024-05-15 01:59:15.204371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:09.717 [2024-05-15 01:59:15.204408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:57176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.717 [2024-05-15 
01:59:15.204436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:09.717 [2024-05-15 01:59:15.204470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:57184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.717 [2024-05-15 01:59:15.204497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:09.717 [2024-05-15 01:59:15.204547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:57192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.717 [2024-05-15 01:59:15.204573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:09.717 [2024-05-15 01:59:15.204608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:57200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.717 [2024-05-15 01:59:15.204634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:09.717 [2024-05-15 01:59:15.204671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:57208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.717 [2024-05-15 01:59:15.204698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:09.717 [2024-05-15 01:59:15.204735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:57216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.717 [2024-05-15 01:59:15.204763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:09.717 [2024-05-15 01:59:15.204797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:57224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.717 [2024-05-15 01:59:15.204822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:09.717 [2024-05-15 01:59:15.204855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:57232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.717 [2024-05-15 01:59:15.204882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:09.717 [2024-05-15 01:59:15.204917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:57240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.717 [2024-05-15 01:59:15.204943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:09.717 [2024-05-15 01:59:15.204980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:57248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.717 [2024-05-15 01:59:15.205004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:09.717 [2024-05-15 01:59:15.205035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:57256 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.717 [2024-05-15 01:59:15.205061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:09.717 [2024-05-15 01:59:15.205094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:57264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.717 [2024-05-15 01:59:15.205119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:09.717 [2024-05-15 01:59:15.205150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:57272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.717 [2024-05-15 01:59:15.205175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:09.718 [2024-05-15 01:59:15.205229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:57280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.718 [2024-05-15 01:59:15.205256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.718 [2024-05-15 01:59:15.205288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:57288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.718 [2024-05-15 01:59:15.205314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:09.718 [2024-05-15 01:59:15.205347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:57296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.718 [2024-05-15 01:59:15.217194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:09.718 [2024-05-15 01:59:15.217264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:57304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.718 [2024-05-15 01:59:15.217283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:09.718 [2024-05-15 01:59:15.217306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:57312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.718 [2024-05-15 01:59:15.217323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:09.718 [2024-05-15 01:59:15.217347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:57320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.718 [2024-05-15 01:59:15.217364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:09.718 [2024-05-15 01:59:15.217386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:57328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.718 [2024-05-15 01:59:15.217402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:09.718 [2024-05-15 01:59:15.217424] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:57336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.718 [2024-05-15 01:59:15.217440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:09.718 [2024-05-15 01:59:15.217469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:57344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.718 [2024-05-15 01:59:15.217486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:09.718 [2024-05-15 01:59:15.217523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:57352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.718 [2024-05-15 01:59:15.217539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:09.718 [2024-05-15 01:59:15.217561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:57360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.718 [2024-05-15 01:59:15.217576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:09.718 [2024-05-15 01:59:15.217597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:57368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.718 [2024-05-15 01:59:15.217612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:09.718 [2024-05-15 01:59:15.217633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:57376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.718 [2024-05-15 01:59:15.217648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:09.718 [2024-05-15 01:59:15.217669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:57384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.718 [2024-05-15 01:59:15.217684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:09.718 [2024-05-15 01:59:15.217705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:57392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.718 [2024-05-15 01:59:15.217720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:09.718 [2024-05-15 01:59:15.217741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:57400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.718 [2024-05-15 01:59:15.217756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:09.718 [2024-05-15 01:59:15.217778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:57408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.718 [2024-05-15 01:59:15.217793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:31:09.718 [2024-05-15 01:59:15.217813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:57416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.718 [2024-05-15 01:59:15.217829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:09.718 [2024-05-15 01:59:15.217849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:57424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.718 [2024-05-15 01:59:15.217865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:09.718 [2024-05-15 01:59:15.217886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:57432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.718 [2024-05-15 01:59:15.217901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:09.718 [2024-05-15 01:59:15.217923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:57440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.718 [2024-05-15 01:59:15.217942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:09.718 [2024-05-15 01:59:15.217965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:57464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.718 [2024-05-15 01:59:15.217980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:09.718 [2024-05-15 01:59:15.218001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:57472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.718 [2024-05-15 01:59:15.218017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:09.718 [2024-05-15 01:59:15.218038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:57480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.718 [2024-05-15 01:59:15.218053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:09.718 [2024-05-15 01:59:15.218073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:57488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.718 [2024-05-15 01:59:15.218089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:09.718 [2024-05-15 01:59:15.218110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:57496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.718 [2024-05-15 01:59:15.218125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:09.718 [2024-05-15 01:59:15.218146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:57504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.718 [2024-05-15 01:59:15.218161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:09.718 [2024-05-15 01:59:15.218182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:57512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.718 [2024-05-15 01:59:15.218211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:09.718 [2024-05-15 01:59:15.218246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:57520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.718 [2024-05-15 01:59:15.218263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:09.718 [2024-05-15 01:59:15.218285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:57528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.718 [2024-05-15 01:59:15.218301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:09.718 [2024-05-15 01:59:15.218322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:57536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.718 [2024-05-15 01:59:15.218338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:09.718 [2024-05-15 01:59:15.218359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:57544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.718 [2024-05-15 01:59:15.218375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:09.718 [2024-05-15 01:59:15.218396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:57552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.718 [2024-05-15 01:59:15.218416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:09.718 [2024-05-15 01:59:15.218439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:57560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.718 [2024-05-15 01:59:15.218455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:09.718 [2024-05-15 01:59:15.218476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:57568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.718 [2024-05-15 01:59:15.218492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:09.718 [2024-05-15 01:59:15.219473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:57576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.718 [2024-05-15 01:59:15.219499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:09.718 [2024-05-15 01:59:15.219528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:57584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.718 [2024-05-15 01:59:15.219547] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:09.718 [2024-05-15 01:59:15.219571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:57592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.718 [2024-05-15 01:59:15.219589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:09.718 [2024-05-15 01:59:15.219611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:57600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.718 [2024-05-15 01:59:15.219628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:09.718 [2024-05-15 01:59:15.219652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:57608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.718 [2024-05-15 01:59:15.219669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:09.718 [2024-05-15 01:59:15.219706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:57616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.718 [2024-05-15 01:59:15.219724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:09.718 [2024-05-15 01:59:15.219747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:57624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.718 [2024-05-15 01:59:15.219777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:09.718 [2024-05-15 01:59:15.219799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:57632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.718 [2024-05-15 01:59:15.219815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:09.718 [2024-05-15 01:59:15.219836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:57640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.719 [2024-05-15 01:59:15.219851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:09.719 [2024-05-15 01:59:15.219872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:57648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.719 [2024-05-15 01:59:15.219887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:09.719 [2024-05-15 01:59:15.219913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:57656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.719 [2024-05-15 01:59:15.219930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:09.719 [2024-05-15 01:59:15.219951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:57664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:09.719 [2024-05-15 01:59:15.219966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:09.719 [2024-05-15 01:59:15.219987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:57672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.719 [2024-05-15 01:59:15.220003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:09.719 [2024-05-15 01:59:15.220024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:57680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.719 [2024-05-15 01:59:15.220039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:09.719 [2024-05-15 01:59:15.220060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:57688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.719 [2024-05-15 01:59:15.220075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:09.719 [2024-05-15 01:59:15.220096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:57696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.719 [2024-05-15 01:59:15.220111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:09.719 [2024-05-15 01:59:15.220131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:57704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.719 [2024-05-15 01:59:15.220147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:09.719 [2024-05-15 01:59:15.220167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:57712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.719 [2024-05-15 01:59:15.220182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:09.719 [2024-05-15 01:59:15.220231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:57720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.719 [2024-05-15 01:59:15.220252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:09.719 [2024-05-15 01:59:15.220276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.719 [2024-05-15 01:59:15.220292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:09.719 [2024-05-15 01:59:15.220314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:57736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.719 [2024-05-15 01:59:15.220330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:09.719 [2024-05-15 01:59:15.220352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:57744 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.719 [2024-05-15 01:59:15.220368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:09.719 [2024-05-15 01:59:15.220395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:57752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.719 [2024-05-15 01:59:15.220412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:09.719 [2024-05-15 01:59:15.220434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:57760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.719 [2024-05-15 01:59:15.220450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:09.719 [2024-05-15 01:59:15.220471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:57768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.719 [2024-05-15 01:59:15.220487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:09.719 [2024-05-15 01:59:15.220508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:57776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.719 [2024-05-15 01:59:15.220524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:09.719 [2024-05-15 01:59:15.220545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:57784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.719 [2024-05-15 01:59:15.220576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:09.719 [2024-05-15 01:59:15.220597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:57792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.719 [2024-05-15 01:59:15.220612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:09.719 [2024-05-15 01:59:15.220633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:57800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.719 [2024-05-15 01:59:15.220648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:09.719 [2024-05-15 01:59:15.220669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:57808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.719 [2024-05-15 01:59:15.220683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:09.719 [2024-05-15 01:59:15.220704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:57816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.719 [2024-05-15 01:59:15.220720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:09.719 [2024-05-15 01:59:15.220741] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:57824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.719 [2024-05-15 01:59:15.220756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:09.719 [2024-05-15 01:59:15.220793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:57832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.719 [2024-05-15 01:59:15.220809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:09.719 [2024-05-15 01:59:15.220846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:57840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.719 [2024-05-15 01:59:15.220863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:09.719 [2024-05-15 01:59:15.220889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:57848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.719 [2024-05-15 01:59:15.220933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:09.719 [2024-05-15 01:59:15.220960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:57856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.719 [2024-05-15 01:59:15.220977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:09.719 [2024-05-15 01:59:15.220999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:57864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.719 [2024-05-15 01:59:15.221017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:09.719 [2024-05-15 01:59:15.221039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:57872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.719 [2024-05-15 01:59:15.221056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:09.719 [2024-05-15 01:59:15.221079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:57880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.719 [2024-05-15 01:59:15.221095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:09.719 [2024-05-15 01:59:15.221118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:57888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.719 [2024-05-15 01:59:15.221150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:09.719 [2024-05-15 01:59:15.221173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:57896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.719 [2024-05-15 01:59:15.221189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:09.719 [2024-05-15 
01:59:15.221236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:57904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.719 [2024-05-15 01:59:15.221253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:09.719 [2024-05-15 01:59:15.221291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:57912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.719 [2024-05-15 01:59:15.221308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:09.719 [2024-05-15 01:59:15.221330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:57920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.719 [2024-05-15 01:59:15.221346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:09.719 [2024-05-15 01:59:15.221383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:57928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.719 [2024-05-15 01:59:15.221399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:09.719 [2024-05-15 01:59:15.221421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:57936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.719 [2024-05-15 01:59:15.221436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:09.719 [2024-05-15 01:59:15.221457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:57944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.719 [2024-05-15 01:59:15.221477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:09.719 [2024-05-15 01:59:15.222084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:57952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.719 [2024-05-15 01:59:15.222114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:09.719 [2024-05-15 01:59:15.222149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:57960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.719 [2024-05-15 01:59:15.222168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:09.719 [2024-05-15 01:59:15.222192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:57968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.719 [2024-05-15 01:59:15.222209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:09.720 [2024-05-15 01:59:15.222245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:57976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.720 [2024-05-15 01:59:15.222275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 
sqhd:0055 p:0 m:0 dnr:0 00:31:09.720 [2024-05-15 01:59:15.222298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:57984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.720 [2024-05-15 01:59:15.222314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:09.720 [2024-05-15 01:59:15.222337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:57992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.720 [2024-05-15 01:59:15.222369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:09.720 [2024-05-15 01:59:15.222392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:58000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.720 [2024-05-15 01:59:15.222408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:09.720 [2024-05-15 01:59:15.222430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:58008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.720 [2024-05-15 01:59:15.222461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:09.720 [2024-05-15 01:59:15.222485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:58016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.720 [2024-05-15 01:59:15.222501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:09.720 [2024-05-15 01:59:15.222537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:58024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.720 [2024-05-15 01:59:15.222554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:09.720 [2024-05-15 01:59:15.222574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:58032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.720 [2024-05-15 01:59:15.222590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:09.720 [2024-05-15 01:59:15.222611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:58040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.720 [2024-05-15 01:59:15.222626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:09.720 [2024-05-15 01:59:15.222652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:58048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.720 [2024-05-15 01:59:15.222667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:09.720 [2024-05-15 01:59:15.222689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:58056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.720 [2024-05-15 01:59:15.222704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:09.720 [2024-05-15 01:59:15.222725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:58064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:09.720 [2024-05-15 01:59:15.222741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:31:09.720 [2024-05-15 01:59:15.222942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:57072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:09.720 [2024-05-15 01:59:15.222958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pair repeats for every remaining outstanding READ and WRITE I/O on qid:1 (cids 0-126, lba range 57072-58088, timestamps 01:59:15.222725 through 01:59:15.233536); every completion reports ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:31:09.724 [2024-05-15 01:59:15.233536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1
lba:57664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.724 [2024-05-15 01:59:15.233552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:09.724 [2024-05-15 01:59:15.233573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:57672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.724 [2024-05-15 01:59:15.233589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:09.724 [2024-05-15 01:59:15.233609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:57680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.724 [2024-05-15 01:59:15.233624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:09.724 [2024-05-15 01:59:15.233645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:57688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.724 [2024-05-15 01:59:15.233660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:09.724 [2024-05-15 01:59:15.233680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:57696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.724 [2024-05-15 01:59:15.233696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:09.724 [2024-05-15 01:59:15.233716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:57704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.724 [2024-05-15 01:59:15.233731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:09.724 [2024-05-15 01:59:15.233752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.724 [2024-05-15 01:59:15.233768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:09.724 [2024-05-15 01:59:15.233794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:57720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.724 [2024-05-15 01:59:15.233810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:09.724 [2024-05-15 01:59:15.233830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:57728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.724 [2024-05-15 01:59:15.233845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:09.724 [2024-05-15 01:59:15.233866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:57736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.724 [2024-05-15 01:59:15.233881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:09.724 [2024-05-15 01:59:15.233901] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:57744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.724 [2024-05-15 01:59:15.233916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:09.724 [2024-05-15 01:59:15.233936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:57752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.724 [2024-05-15 01:59:15.233951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:09.724 [2024-05-15 01:59:15.233976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:57760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.724 [2024-05-15 01:59:15.233992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:09.725 [2024-05-15 01:59:15.234012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:57768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.725 [2024-05-15 01:59:15.234027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:09.725 [2024-05-15 01:59:15.234048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:57776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.725 [2024-05-15 01:59:15.234063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:09.725 [2024-05-15 01:59:15.234084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:57784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.725 [2024-05-15 01:59:15.234099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:09.725 [2024-05-15 01:59:15.234119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:57792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.725 [2024-05-15 01:59:15.234134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:09.725 [2024-05-15 01:59:15.234154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:57800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.725 [2024-05-15 01:59:15.234169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:09.725 [2024-05-15 01:59:15.234189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:57808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.725 [2024-05-15 01:59:15.234226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:09.725 [2024-05-15 01:59:15.234250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:57816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.725 [2024-05-15 01:59:15.234266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
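The "(03/02)" printed with every completion above is the NVMe status field as Status Code Type / Status Code: SCT 0x3 (Path Related) with SC 0x02 (Asymmetric Access Inaccessible), which is the expected failure while the target reports the namespace's ANA group as inaccessible. Below is a minimal stand-alone C sketch of that decoding, added here purely as an illustration; the constants are defined locally for the sketch (SPDK's spdk/nvme_spec.h carries equivalent enum values), and none of this is part of the captured test output.

#include <stdio.h>

/* NVMe status field pieces as printed in the "(SCT/SC)" token above.
 * Constants are local to this sketch; SPDK's spdk/nvme_spec.h defines
 * equivalent enums for the path-related status code type. */
#define NVME_SCT_PATH                0x3  /* Status Code Type: Path Related */
#define NVME_SC_INTERNAL_PATH_ERROR  0x00
#define NVME_SC_ANA_PERSISTENT_LOSS  0x01
#define NVME_SC_ANA_INACCESSIBLE     0x02 /* the (03/02) seen throughout this log */
#define NVME_SC_ANA_TRANSITION       0x03

static const char *decode_path_status(unsigned int sct, unsigned int sc)
{
    if (sct != NVME_SCT_PATH) {
        return "not a path-related status";
    }
    switch (sc) {
    case NVME_SC_INTERNAL_PATH_ERROR: return "internal path error";
    case NVME_SC_ANA_PERSISTENT_LOSS: return "ANA persistent loss";
    case NVME_SC_ANA_INACCESSIBLE:    return "ANA inaccessible";
    case NVME_SC_ANA_TRANSITION:      return "ANA transition";
    default:                          return "reserved/other path status";
    }
}

int main(void)
{
    /* The pair printed for every failed I/O in the burst above. */
    printf("(03/02) -> %s\n", decode_path_status(0x3, 0x02));
    return 0;
}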
[... second burst: starting at 01:59:30.720, READs and WRITEs on sqid:1 nsid:1 at lba 33600-34072 complete with the same ASYMMETRIC ACCESS INACCESSIBLE (03/02) status ...]
00:31:09.728 [2024-05-15 01:59:30.721015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:33696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.728 [2024-05-15 
01:59:30.721038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:09.728 [2024-05-15 01:59:30.721070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:34096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.728 [2024-05-15 01:59:30.721092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:09.728 [2024-05-15 01:59:30.721121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:34112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.728 [2024-05-15 01:59:30.721143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:09.728 [2024-05-15 01:59:30.721172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:34128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.728 [2024-05-15 01:59:30.721193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:09.728 [2024-05-15 01:59:30.721258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:34144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.728 [2024-05-15 01:59:30.721297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:09.728 [2024-05-15 01:59:30.721327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:34160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.728 [2024-05-15 01:59:30.721350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:09.728 [2024-05-15 01:59:30.721380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:34176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.728 [2024-05-15 01:59:30.721403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:09.728 [2024-05-15 01:59:30.721433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:34192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.728 [2024-05-15 01:59:30.721455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:09.728 [2024-05-15 01:59:30.721485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:34208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.728 [2024-05-15 01:59:30.721523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:09.728 [2024-05-15 01:59:30.721552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:33736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.728 [2024-05-15 01:59:30.721588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:09.728 [2024-05-15 01:59:30.721616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:33768 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:31:09.728 [2024-05-15 01:59:30.721637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:09.728 [2024-05-15 01:59:30.721664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:33800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.728 [2024-05-15 01:59:30.721686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:09.728 [2024-05-15 01:59:30.721715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:33832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.728 [2024-05-15 01:59:30.721737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:09.728 [2024-05-15 01:59:30.721766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.728 [2024-05-15 01:59:30.721788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.728 [2024-05-15 01:59:30.721816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:34232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.728 [2024-05-15 01:59:30.721838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:09.728 [2024-05-15 01:59:30.721866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:34248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.728 [2024-05-15 01:59:30.721888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:09.728 [2024-05-15 01:59:30.721917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.728 [2024-05-15 01:59:30.721943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:09.728 [2024-05-15 01:59:30.721973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.728 [2024-05-15 01:59:30.721996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:09.728 [2024-05-15 01:59:30.722026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:34296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.728 [2024-05-15 01:59:30.722047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:09.728 [2024-05-15 01:59:30.722075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:34312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.728 [2024-05-15 01:59:30.722097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:09.728 [2024-05-15 01:59:30.722124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:104 nsid:1 lba:34328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.728 [2024-05-15 01:59:30.722147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:09.728 [2024-05-15 01:59:30.722176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:34344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.728 [2024-05-15 01:59:30.722213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:09.728 [2024-05-15 01:59:30.722255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:34360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.728 [2024-05-15 01:59:30.722293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:09.728 [2024-05-15 01:59:30.722327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:34376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.728 [2024-05-15 01:59:30.722353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:09.728 [2024-05-15 01:59:30.722388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:34392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.728 [2024-05-15 01:59:30.722416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:09.728 [2024-05-15 01:59:30.724362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:33592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.728 [2024-05-15 01:59:30.724399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:09.728 [2024-05-15 01:59:30.724444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:33624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.728 [2024-05-15 01:59:30.724472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:09.728 [2024-05-15 01:59:30.724525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:33656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.728 [2024-05-15 01:59:30.724551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:09.728 [2024-05-15 01:59:30.724599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:33688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.728 [2024-05-15 01:59:30.724646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:09.728 [2024-05-15 01:59:30.724679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.728 [2024-05-15 01:59:30.724703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:09.728 [2024-05-15 01:59:30.724735] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.728 [2024-05-15 01:59:30.724759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:09.728 [2024-05-15 01:59:30.724791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:34448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.728 [2024-05-15 01:59:30.724817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:09.728 [2024-05-15 01:59:30.724847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.728 [2024-05-15 01:59:30.724870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:09.728 [2024-05-15 01:59:30.724915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:34480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.728 [2024-05-15 01:59:30.724939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:09.728 [2024-05-15 01:59:30.724972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:34496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.728 [2024-05-15 01:59:30.724996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:09.728 [2024-05-15 01:59:30.725029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.728 [2024-05-15 01:59:30.725053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:09.728 [2024-05-15 01:59:30.725588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:34528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.728 [2024-05-15 01:59:30.725623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:09.728 [2024-05-15 01:59:30.725666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:34544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.728 [2024-05-15 01:59:30.725696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:09.728 [2024-05-15 01:59:30.725736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.728 [2024-05-15 01:59:30.725766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:09.728 [2024-05-15 01:59:30.725806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:33712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.728 [2024-05-15 01:59:30.725835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001a p:0 m:0 dnr:0 
00:31:09.728 [2024-05-15 01:59:30.725874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:33744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.728 [2024-05-15 01:59:30.725903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:09.728 [2024-05-15 01:59:30.725961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:33776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.728 [2024-05-15 01:59:30.725988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:09.728 [2024-05-15 01:59:30.726025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:33808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.728 [2024-05-15 01:59:30.726052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:09.728 [2024-05-15 01:59:30.726089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:33840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.728 [2024-05-15 01:59:30.726130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:09.728 [2024-05-15 01:59:30.726168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:33864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.728 [2024-05-15 01:59:30.726211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:09.728 [2024-05-15 01:59:30.726278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:33896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.728 [2024-05-15 01:59:30.726308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:09.728 [2024-05-15 01:59:30.726347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:33928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.728 [2024-05-15 01:59:30.726376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:09.728 [2024-05-15 01:59:30.726429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:33960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.728 [2024-05-15 01:59:30.726458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:09.728 [2024-05-15 01:59:30.726493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:33992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.728 [2024-05-15 01:59:30.726538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.726587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:34024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.729 [2024-05-15 01:59:30.726613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.726646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:34568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.729 [2024-05-15 01:59:30.726671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.726704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.729 [2024-05-15 01:59:30.726743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.726778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:33872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.729 [2024-05-15 01:59:30.726821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.726865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:33904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.729 [2024-05-15 01:59:30.726893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.726928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:33936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.729 [2024-05-15 01:59:30.726955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.726991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:33968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.729 [2024-05-15 01:59:30.727018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.727070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:34000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.729 [2024-05-15 01:59:30.727097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.727132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:34032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.729 [2024-05-15 01:59:30.727160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.727197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:34600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.729 [2024-05-15 01:59:30.727250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.727289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:34072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.729 [2024-05-15 01:59:30.727331] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.727369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:33632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.729 [2024-05-15 01:59:30.727396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.727432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:33696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.729 [2024-05-15 01:59:30.727460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.727498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:34112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.729 [2024-05-15 01:59:30.727539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.727576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:34144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.729 [2024-05-15 01:59:30.727600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.727647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:34176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.729 [2024-05-15 01:59:30.727673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.727723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:34208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.729 [2024-05-15 01:59:30.727756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.727792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:33768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.729 [2024-05-15 01:59:30.727819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.727855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:33832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.729 [2024-05-15 01:59:30.727880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.727915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:34232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.729 [2024-05-15 01:59:30.727943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.727980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:34264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:09.729 [2024-05-15 01:59:30.728010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.728049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.729 [2024-05-15 01:59:30.728078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.728117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.729 [2024-05-15 01:59:30.728146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.728186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:34360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.729 [2024-05-15 01:59:30.728244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.728283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:34392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.729 [2024-05-15 01:59:30.728311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.728350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:34064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.729 [2024-05-15 01:59:30.728378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.728416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:34088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.729 [2024-05-15 01:59:30.728445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.728482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:34120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.729 [2024-05-15 01:59:30.728511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.728562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:34152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.729 [2024-05-15 01:59:30.728595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.728633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:34184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.729 [2024-05-15 01:59:30.728661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.728698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 
lba:34216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.729 [2024-05-15 01:59:30.728726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.729633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.729 [2024-05-15 01:59:30.729699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.729759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:34608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.729 [2024-05-15 01:59:30.729788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.729827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:34624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.729 [2024-05-15 01:59:30.729870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.729909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:34304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.729 [2024-05-15 01:59:30.729937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.729974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:34336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.729 [2024-05-15 01:59:30.730001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.730038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:34368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.729 [2024-05-15 01:59:30.730065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.730102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:34400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.729 [2024-05-15 01:59:30.730128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.730164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:34640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.729 [2024-05-15 01:59:30.730191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.730252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.729 [2024-05-15 01:59:30.730281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.730317] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:33624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.729 [2024-05-15 01:59:30.730360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.730407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:33688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.729 [2024-05-15 01:59:30.730434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.730470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:34432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.729 [2024-05-15 01:59:30.730499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.730551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.729 [2024-05-15 01:59:30.730578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.730615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:34496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.729 [2024-05-15 01:59:30.730643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.731124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:34408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.729 [2024-05-15 01:59:30.731161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.731203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.729 [2024-05-15 01:59:30.731243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:09.729 [2024-05-15 01:59:30.731286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:34472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.729 [2024-05-15 01:59:30.731316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:09.730 [2024-05-15 01:59:30.731372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:34504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.730 [2024-05-15 01:59:30.731401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:09.730 [2024-05-15 01:59:30.731441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:34544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.730 [2024-05-15 01:59:30.731470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 
00:31:09.730 [2024-05-15 01:59:30.731509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:33712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.730 [2024-05-15 01:59:30.731550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:09.730 [2024-05-15 01:59:30.731585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:33776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.730 [2024-05-15 01:59:30.731612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:09.730 [2024-05-15 01:59:30.731647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:33840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.730 [2024-05-15 01:59:30.731674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:09.730 [2024-05-15 01:59:30.731715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:33896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.730 [2024-05-15 01:59:30.731758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:09.730 [2024-05-15 01:59:30.731795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:33960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.730 [2024-05-15 01:59:30.731838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:09.730 [2024-05-15 01:59:30.731889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:34024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.730 [2024-05-15 01:59:30.731916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:09.730 [2024-05-15 01:59:30.731953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:34584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.730 [2024-05-15 01:59:30.731980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:09.730 [2024-05-15 01:59:30.732015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:33904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.730 [2024-05-15 01:59:30.732056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:09.730 [2024-05-15 01:59:30.732090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.730 [2024-05-15 01:59:30.732115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:09.730 [2024-05-15 01:59:30.732149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:34032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.730 [2024-05-15 01:59:30.732188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:09.730 [2024-05-15 01:59:30.732230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:34072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.730 [2024-05-15 01:59:30.732256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:09.730 [2024-05-15 01:59:30.732289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:33696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.730 [2024-05-15 01:59:30.732314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:09.730 [2024-05-15 01:59:30.732347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.730 [2024-05-15 01:59:30.732372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:09.730 [2024-05-15 01:59:30.732407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:34208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.730 [2024-05-15 01:59:30.732433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:09.730 [2024-05-15 01:59:30.732466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:33832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.730 [2024-05-15 01:59:30.732490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:09.730 [2024-05-15 01:59:30.732520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:34264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.730 [2024-05-15 01:59:30.732550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:09.730 [2024-05-15 01:59:30.732583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:34328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.730 [2024-05-15 01:59:30.732607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:09.730 [2024-05-15 01:59:30.732637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:34392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.730 [2024-05-15 01:59:30.732661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:09.730 [2024-05-15 01:59:30.732692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:34088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.730 [2024-05-15 01:59:30.732717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:09.730 [2024-05-15 01:59:30.732751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:34152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.730 [2024-05-15 01:59:30.732775] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:09.730 [2024-05-15 01:59:30.732809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:34216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.730 [2024-05-15 01:59:30.732833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:09.730 [2024-05-15 01:59:30.733892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:34536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.730 [2024-05-15 01:59:30.733929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:09.730 [2024-05-15 01:59:30.733971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:34608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.730 [2024-05-15 01:59:30.734009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:09.730 [2024-05-15 01:59:30.734046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.730 [2024-05-15 01:59:30.734072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:09.730 [2024-05-15 01:59:30.734122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:34368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.730 [2024-05-15 01:59:30.734148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:09.730 [2024-05-15 01:59:30.734197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:34640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.730 [2024-05-15 01:59:30.734253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:09.730 [2024-05-15 01:59:30.734293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:33624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.730 [2024-05-15 01:59:30.734323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:09.730 [2024-05-15 01:59:30.734362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.730 [2024-05-15 01:59:30.734396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:09.730 [2024-05-15 01:59:30.734436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:34496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.730 [2024-05-15 01:59:30.734464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:09.730 [2024-05-15 01:59:30.734515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:34440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:09.730 [2024-05-15 01:59:30.734555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:09.730 [2024-05-15 01:59:30.734589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:34504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.730 [2024-05-15 01:59:30.734616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:09.730 [2024-05-15 01:59:30.734649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:33712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.730 [2024-05-15 01:59:30.734676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:09.730 [2024-05-15 01:59:30.734709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:33840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.730 [2024-05-15 01:59:30.734736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:09.730 [2024-05-15 01:59:30.734769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:33960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.730 [2024-05-15 01:59:30.734795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:09.730 [2024-05-15 01:59:30.734829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:34584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.730 [2024-05-15 01:59:30.734855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:09.730 [2024-05-15 01:59:30.734889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:33968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.730 [2024-05-15 01:59:30.734915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:09.730 [2024-05-15 01:59:30.734948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:34072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.730 [2024-05-15 01:59:30.734975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:09.730 [2024-05-15 01:59:30.735009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.730 [2024-05-15 01:59:30.735035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:09.730 [2024-05-15 01:59:30.735069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:33832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.730 [2024-05-15 01:59:30.735094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:09.730 [2024-05-15 01:59:30.735128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 
[... the same command/completion pattern repeats at NOTICE level: READ and WRITE commands on sqid:1 are each answered with an ASYMMETRIC ACCESS INACCESSIBLE (03/02) completion, sqhd advancing from 007e, wrapping past 007f to 0000, through 0037 ...]
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.734 [2024-05-15 01:59:30.759153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.759195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:35536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.734 [2024-05-15 01:59:30.759213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.759272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:35552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.734 [2024-05-15 01:59:30.759322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.759361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:35568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.734 [2024-05-15 01:59:30.759382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.759407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:35584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.734 [2024-05-15 01:59:30.759423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.759447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:35248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.734 [2024-05-15 01:59:30.759463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.759486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:35280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.734 [2024-05-15 01:59:30.759502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.759525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:35312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.734 [2024-05-15 01:59:30.759541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.759564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.734 [2024-05-15 01:59:30.759580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.759618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:35104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.734 [2024-05-15 01:59:30.759634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:31:09.734 [2024-05-15 01:59:30.759663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:34776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.734 [2024-05-15 01:59:30.759680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.759702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:35440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.734 [2024-05-15 01:59:30.759732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.759754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:35144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.734 [2024-05-15 01:59:30.759769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.759789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:35272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.734 [2024-05-15 01:59:30.759805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.759826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:35336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.734 [2024-05-15 01:59:30.759857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.759879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:34816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.734 [2024-05-15 01:59:30.759895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.759917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:34944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.734 [2024-05-15 01:59:30.759933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.760404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:35608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.734 [2024-05-15 01:59:30.760430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.760458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:35624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.734 [2024-05-15 01:59:30.760477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.760500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.734 [2024-05-15 01:59:30.760535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:23 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.760558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:35656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.734 [2024-05-15 01:59:30.760574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.760612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:35672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.734 [2024-05-15 01:59:30.760628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.760653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.734 [2024-05-15 01:59:30.760670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.760690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:35384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.734 [2024-05-15 01:59:30.760706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.760727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:35416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.734 [2024-05-15 01:59:30.760743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.760764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:35480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.734 [2024-05-15 01:59:30.760780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.760801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.734 [2024-05-15 01:59:30.760817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.760838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:35192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.734 [2024-05-15 01:59:30.760853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.760875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:35016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.734 [2024-05-15 01:59:30.760891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.760912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:34808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.734 [2024-05-15 01:59:30.760928] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.760965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.734 [2024-05-15 01:59:30.760982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.761004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:34728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.734 [2024-05-15 01:59:30.761020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.761042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:34392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.734 [2024-05-15 01:59:30.761059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.761081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.734 [2024-05-15 01:59:30.761097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.761123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.734 [2024-05-15 01:59:30.761140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.762312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:35432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.734 [2024-05-15 01:59:30.762351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.762380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:35200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.734 [2024-05-15 01:59:30.762398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.762420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:35032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.734 [2024-05-15 01:59:30.762437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.762459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.734 [2024-05-15 01:59:30.762475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.762497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:34872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:09.734 [2024-05-15 01:59:30.762527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.762558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:35688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.734 [2024-05-15 01:59:30.762574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.762610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:35704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.734 [2024-05-15 01:59:30.762626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.762647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.734 [2024-05-15 01:59:30.762662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.762682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:35736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.734 [2024-05-15 01:59:30.762697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.762718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:35536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.734 [2024-05-15 01:59:30.762734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.762754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:35568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.734 [2024-05-15 01:59:30.762770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.762790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:35248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.734 [2024-05-15 01:59:30.762811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.762833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:35312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.734 [2024-05-15 01:59:30.762848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.762868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:35104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.734 [2024-05-15 01:59:30.762884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:09.734 [2024-05-15 01:59:30.762905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 
lba:35440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.735 [2024-05-15 01:59:30.762921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.762942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:35272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.735 [2024-05-15 01:59:30.762957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.762978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.735 [2024-05-15 01:59:30.762993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.763014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.735 [2024-05-15 01:59:30.763029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.763050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:35288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.735 [2024-05-15 01:59:30.763065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.763086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:35624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.735 [2024-05-15 01:59:30.763101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.763122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.735 [2024-05-15 01:59:30.763137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.763157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:35352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.735 [2024-05-15 01:59:30.763172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.763193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:35416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.735 [2024-05-15 01:59:30.763233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.763258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.735 [2024-05-15 01:59:30.763277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.763300] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:35016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.735 [2024-05-15 01:59:30.763316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.763337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:35376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.735 [2024-05-15 01:59:30.763352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.763374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:34392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.735 [2024-05-15 01:59:30.763389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.763411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:34744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.735 [2024-05-15 01:59:30.763427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.765435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:35752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.735 [2024-05-15 01:59:30.765461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.765489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:35768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.735 [2024-05-15 01:59:30.765517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.765556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:35784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.735 [2024-05-15 01:59:30.765574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.765598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:35800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.735 [2024-05-15 01:59:30.765631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.765659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:35816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.735 [2024-05-15 01:59:30.765698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.765745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:35832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.735 [2024-05-15 01:59:30.765766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 
00:31:09.735 [2024-05-15 01:59:30.765791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.735 [2024-05-15 01:59:30.765809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.765831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:35472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.735 [2024-05-15 01:59:30.765847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.765875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:35504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.735 [2024-05-15 01:59:30.765892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.765931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:35200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.735 [2024-05-15 01:59:30.765947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.765969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:34968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.735 [2024-05-15 01:59:30.765985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.766006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:35688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.735 [2024-05-15 01:59:30.766023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.766059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:35720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.735 [2024-05-15 01:59:30.766076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.766097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.735 [2024-05-15 01:59:30.766113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.766135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:35248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.735 [2024-05-15 01:59:30.766150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.766171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:35104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.735 [2024-05-15 01:59:30.766187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.766231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.735 [2024-05-15 01:59:30.766276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.766300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:35448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.735 [2024-05-15 01:59:30.766317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.766339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:35624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.735 [2024-05-15 01:59:30.766355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.766377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.735 [2024-05-15 01:59:30.766393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.766421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.735 [2024-05-15 01:59:30.766438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.766459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:35376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.735 [2024-05-15 01:59:30.766476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.766514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:34744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.735 [2024-05-15 01:59:30.766531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.767428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:35392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.735 [2024-05-15 01:59:30.767452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.767478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:35184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.735 [2024-05-15 01:59:30.767495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.767542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:35864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.735 [2024-05-15 01:59:30.767560] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.767583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:35880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.735 [2024-05-15 01:59:30.767614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.767637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:35896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.735 [2024-05-15 01:59:30.767653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.767675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:35912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.735 [2024-05-15 01:59:30.767690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.767712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.735 [2024-05-15 01:59:30.767727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.767749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:35944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.735 [2024-05-15 01:59:30.767780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.767804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:35544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.735 [2024-05-15 01:59:30.767842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.767872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:35576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.735 [2024-05-15 01:59:30.767911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.768346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:35456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.735 [2024-05-15 01:59:30.768372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.768400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:35304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.735 [2024-05-15 01:59:30.768418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:09.735 [2024-05-15 01:59:30.768461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:35960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:09.736 [2024-05-15 01:59:30.768478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.768517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:35976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.736 [2024-05-15 01:59:30.768533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.768556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:35992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.736 [2024-05-15 01:59:30.768594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.768621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:36008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.736 [2024-05-15 01:59:30.768638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.768662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:36024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.736 [2024-05-15 01:59:30.768678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.768702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:35616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.736 [2024-05-15 01:59:30.768718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.768741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:35648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.736 [2024-05-15 01:59:30.768757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.768779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:35680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.736 [2024-05-15 01:59:30.768812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.768836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:35768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.736 [2024-05-15 01:59:30.768869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.768892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:35800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.736 [2024-05-15 01:59:30.768913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.768953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:35832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.736 [2024-05-15 01:59:30.768969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.768991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.736 [2024-05-15 01:59:30.769007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.769028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:35200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.736 [2024-05-15 01:59:30.769044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.769065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:35688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.736 [2024-05-15 01:59:30.769082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.769118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.736 [2024-05-15 01:59:30.769134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.769154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:35104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.736 [2024-05-15 01:59:30.769170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.769206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:35448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.736 [2024-05-15 01:59:30.769230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.769254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:35352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.736 [2024-05-15 01:59:30.769292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.769317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:35376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.736 [2024-05-15 01:59:30.769334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.769835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:35464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.736 [2024-05-15 01:59:30.769859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.769886] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:35344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.736 [2024-05-15 01:59:30.769904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.769927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:35216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.736 [2024-05-15 01:59:30.769958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.769985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:36040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.736 [2024-05-15 01:59:30.770001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.770022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:36056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.736 [2024-05-15 01:59:30.770038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.770059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.736 [2024-05-15 01:59:30.770075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.770096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:36088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.736 [2024-05-15 01:59:30.770112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.770133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:35184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.736 [2024-05-15 01:59:30.770164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.770187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:35880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.736 [2024-05-15 01:59:30.770225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.770252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:35912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.736 [2024-05-15 01:59:30.770270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.770292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.736 [2024-05-15 01:59:30.770309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 
00:31:09.736 [2024-05-15 01:59:30.770331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:35576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.736 [2024-05-15 01:59:30.770348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.771718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:35712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.736 [2024-05-15 01:59:30.771741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.771767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:35552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.736 [2024-05-15 01:59:30.771784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.771806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:35304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.736 [2024-05-15 01:59:30.771821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.771849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.736 [2024-05-15 01:59:30.771865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.771886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:36008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.736 [2024-05-15 01:59:30.771902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.771923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:35616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.736 [2024-05-15 01:59:30.771943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.771965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:35680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.736 [2024-05-15 01:59:30.771981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.772001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:35800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.736 [2024-05-15 01:59:30.772016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.772037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:35472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.736 [2024-05-15 01:59:30.772053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.772074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:35688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.736 [2024-05-15 01:59:30.772089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.772110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:35104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.736 [2024-05-15 01:59:30.772126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.772147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:35352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.736 [2024-05-15 01:59:30.772162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.772183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:35336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.736 [2024-05-15 01:59:30.772213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.772247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:35640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.736 [2024-05-15 01:59:30.772264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.772286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:35480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.736 [2024-05-15 01:59:30.772301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.772323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:35344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.736 [2024-05-15 01:59:30.772343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.772365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.736 [2024-05-15 01:59:30.772381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.772403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:36072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.736 [2024-05-15 01:59:30.772419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.772440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:35184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.736 [2024-05-15 01:59:30.772455] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.772477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:35912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.736 [2024-05-15 01:59:30.772507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.772530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:35576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.736 [2024-05-15 01:59:30.772545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.775269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.736 [2024-05-15 01:59:30.775312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.775359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:36120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.736 [2024-05-15 01:59:30.775381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.775406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:36136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.736 [2024-05-15 01:59:30.775424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.775447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:36152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.736 [2024-05-15 01:59:30.775465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.775488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:36168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.736 [2024-05-15 01:59:30.775505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.775543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:35000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.736 [2024-05-15 01:59:30.775561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.775584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:36192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.736 [2024-05-15 01:59:30.775619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:09.736 [2024-05-15 01:59:30.775642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:36208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
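
In the completions above, the (03/02) pair is SPDK's (SCT/SC) rendering of the NVMe status field: status code type 03h is Path Related Status, and status code 02h within that type is Asymmetric Access Inaccessible, i.e. the ANA group behind the path these commands were issued on is currently not usable for I/O. dnr:0 shows the Do Not Retry bit clear, so the initiator is allowed to reissue each command on another path, which is the failover behavior this multipath test provokes. The Python sketch below is a reader-side aid, not part of the test output; it assumes only the NOTICE format printed above and decodes one such line:

import re

# (SCT, SC) pairs per the NVMe base specification; only the pair that
# actually appears in this log is mapped, anything else falls through
# to a raw hex rendering.
STATUS_NAMES = {
    (0x3, 0x02): "ASYMMETRIC ACCESS INACCESSIBLE",  # SCT 3h = Path Related Status
}

# Matches the tail of a completion NOTICE, e.g.
#   "... (03/02) qid:1 cid:64 cdw0:0 sqhd:001a p:0 m:0 dnr:0"
COMPLETION_RE = re.compile(
    r"\((?P<sct>[0-9a-fA-F]{2})/(?P<sc>[0-9a-fA-F]{2})\)"
    r" qid:(?P<qid>\d+) cid:(?P<cid>\d+) cdw0:(?P<cdw0>[0-9a-fA-F]+)"
    r" sqhd:(?P<sqhd>[0-9a-fA-F]+) p:(?P<p>\d) m:(?P<m>\d) dnr:(?P<dnr>\d)"
)

def decode_completion(line: str):
    """Pull the status, queue/command ids, and retry hint out of one NOTICE line."""
    m = COMPLETION_RE.search(line)
    if m is None:
        return None
    sct, sc = int(m["sct"], 16), int(m["sc"], 16)
    return {
        "status": STATUS_NAMES.get((sct, sc), f"sct={sct:#x} sc={sc:#x}"),
        "qid": int(m["qid"]),          # I/O queue pair the command completed on
        "cid": int(m["cid"]),          # command identifier within that queue
        "retryable": m["dnr"] == "0",  # DNR clear: host may retry on another path
    }

print(decode_completion(
    "ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001a p:0 m:0 dnr:0"
))
# -> {'status': 'ASYMMETRIC ACCESS INACCESSIBLE', 'qid': 1, 'cid': 64, 'retryable': True}
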
00:31:09.737 [2024-05-15 01:59:30.775658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:09.737 [2024-05-15 01:59:30.775678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:36224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:09.737 [2024-05-15 01:59:30.775709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:09.737 [2024-05-15 01:59:30.775732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:35760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.737 [2024-05-15 01:59:30.775768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:09.737 [2024-05-15 01:59:30.775799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:35792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.737 [2024-05-15 01:59:30.775821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:09.737 [2024-05-15 01:59:30.775845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:35824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.737 [2024-05-15 01:59:30.775862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:09.737 [2024-05-15 01:59:30.775884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:35856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.737 [2024-05-15 01:59:30.775901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:09.737 [2024-05-15 01:59:30.775923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:35736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.737 [2024-05-15 01:59:30.775940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:09.737 [2024-05-15 01:59:30.775962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:35440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:09.737 [2024-05-15 01:59:30.775979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:09.737 Received shutdown signal, test time was about 32.275026 seconds 00:31:09.737 00:31:09.737 Latency(us) 00:31:09.737 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:09.737 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:31:09.737 Verification LBA range: start 0x0 length 0x4000 00:31:09.737 Nvme0n1 : 32.27 7901.70 30.87 0.00 0.00 16167.28 312.51 4076242.11 00:31:09.737 =================================================================================================================== 00:31:09.737 Total : 7901.70 30.87 0.00 0.00 16167.28 312.51 4076242.11 00:31:09.737 01:59:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
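The throughput columns in the summary above are internally consistent for 4 KiB I/Os: MiB/s = IOPS x 4096 / 2^20, and total I/Os is roughly IOPS x runtime. A quick check of the reported figures (a sketch, not part of the captured output):

    # Sanity-check the reported 30.87 MiB/s against 7901.70 IOPS at 4 KiB per I/O
    awk 'BEGIN { printf "%.2f MiB/s, ~%d total I/Os\n", 7901.70*4096/1048576, 7901.70*32.275026 }'
    # -> 30.87 MiB/s, ~255027 total I/Os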
00:31:09.994 01:59:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:31:09.994 01:59:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:09.994 01:59:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:31:09.994 01:59:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:09.994 01:59:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:31:09.994 01:59:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:09.994 01:59:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:31:09.994 01:59:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:09.994 01:59:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:09.994 rmmod nvme_tcp 00:31:09.994 rmmod nvme_fabrics 00:31:09.994 rmmod nvme_keyring 00:31:09.994 01:59:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:09.994 01:59:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:31:09.994 01:59:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:31:09.994 01:59:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 4179984 ']' 00:31:09.994 01:59:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 4179984 00:31:09.994 01:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@947 -- # '[' -z 4179984 ']' 00:31:09.994 01:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # kill -0 4179984 00:31:09.994 01:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # uname 00:31:09.994 01:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:31:09.994 01:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4179984 00:31:09.994 01:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:31:09.994 01:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:31:09.994 01:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4179984' 00:31:09.994 killing process with pid 4179984 00:31:09.994 01:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # kill 4179984 00:31:09.994 [2024-05-15 01:59:33.833291] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:09.994 01:59:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # wait 4179984 00:31:10.252 01:59:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:10.252 01:59:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:10.252 01:59:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:10.252 01:59:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:10.252 01:59:34 nvmf_tcp.nvmf_host_multipath_status -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:31:10.252 01:59:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:10.252 01:59:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:10.252 01:59:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:12.783 01:59:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:12.783 00:31:12.783 real 0m41.254s 00:31:12.783 user 2m1.409s 00:31:12.783 sys 0m11.606s 00:31:12.783 01:59:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # xtrace_disable 00:31:12.783 01:59:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:12.783 ************************************ 00:31:12.783 END TEST nvmf_host_multipath_status 00:31:12.783 ************************************ 00:31:12.783 01:59:36 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:12.783 01:59:36 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:31:12.783 01:59:36 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:31:12.783 01:59:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:12.783 ************************************ 00:31:12.783 START TEST nvmf_discovery_remove_ifc 00:31:12.783 ************************************ 00:31:12.783 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:12.783 * Looking for test storage... 
00:31:12.783 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:12.783 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:12.783 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:31:12.784 01:59:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:14.706 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:14.706 Found 0000:09:00.1 (0x8086 - 0x159b) 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:14.706 01:59:38 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:14.706 Found net devices under 0000:09:00.0: cvl_0_0 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:14.706 Found net devices under 0000:09:00.1: cvl_0_1 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:14.706 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:14.975 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:14.975 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:14.975 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:14.975 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:14.975 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:31:14.975 00:31:14.975 --- 10.0.0.2 ping statistics --- 00:31:14.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:14.975 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:31:14.975 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:14.975 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:14.975 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:31:14.975 00:31:14.975 --- 10.0.0.1 ping statistics --- 00:31:14.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:14.975 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:31:14.975 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:14.975 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:31:14.975 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:14.975 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:14.975 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:14.975 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:14.975 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:14.975 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:14.975 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:14.975 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:31:14.975 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:14.975 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@721 -- # xtrace_disable 00:31:14.975 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:14.975 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=4186741 00:31:14.975 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:14.975 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 4186741 00:31:14.975 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@828 -- # '[' -z 4186741 ']' 00:31:14.975 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:14.975 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local max_retries=100 00:31:14.975 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:14.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:14.975 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # xtrace_disable 00:31:14.975 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:14.975 [2024-05-15 01:59:38.722400] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
00:31:14.975 [2024-05-15 01:59:38.722483] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:14.975 EAL: No free 2048 kB hugepages reported on node 1 00:31:14.975 [2024-05-15 01:59:38.793329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:14.975 [2024-05-15 01:59:38.875575] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:14.975 [2024-05-15 01:59:38.875639] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:14.975 [2024-05-15 01:59:38.875659] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:14.975 [2024-05-15 01:59:38.875671] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:14.975 [2024-05-15 01:59:38.875680] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:14.975 [2024-05-15 01:59:38.875717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:15.232 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:31:15.232 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@861 -- # return 0 00:31:15.232 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:15.232 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@727 -- # xtrace_disable 00:31:15.232 01:59:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:15.232 01:59:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:15.232 01:59:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:31:15.232 01:59:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:15.232 01:59:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:15.232 [2024-05-15 01:59:39.023952] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:15.232 [2024-05-15 01:59:39.031916] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:31:15.232 [2024-05-15 01:59:39.032162] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:15.232 null0 00:31:15.232 [2024-05-15 01:59:39.064069] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:15.232 01:59:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:15.232 01:59:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=4186763 00:31:15.232 01:59:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:31:15.233 01:59:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 4186763 /tmp/host.sock 00:31:15.233 01:59:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@828 -- # '[' -z 4186763 ']' 00:31:15.233 01:59:39 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local rpc_addr=/tmp/host.sock 00:31:15.233 01:59:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local max_retries=100 00:31:15.233 01:59:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:15.233 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:15.233 01:59:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # xtrace_disable 00:31:15.233 01:59:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:15.233 [2024-05-15 01:59:39.123488] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:31:15.233 [2024-05-15 01:59:39.123567] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4186763 ] 00:31:15.233 EAL: No free 2048 kB hugepages reported on node 1 00:31:15.490 [2024-05-15 01:59:39.190035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:15.490 [2024-05-15 01:59:39.269238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:15.490 01:59:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:31:15.490 01:59:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@861 -- # return 0 00:31:15.490 01:59:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:15.490 01:59:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:31:15.491 01:59:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:15.491 01:59:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:15.491 01:59:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:15.491 01:59:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:31:15.491 01:59:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:15.491 01:59:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:15.748 01:59:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:15.748 01:59:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:31:15.748 01:59:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:15.748 01:59:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:16.679 [2024-05-15 01:59:40.501484] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:16.679 [2024-05-15 01:59:40.501553] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:16.679 [2024-05-15 
01:59:40.501576] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:16.679 [2024-05-15 01:59:40.587826] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:16.936 [2024-05-15 01:59:40.764597] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:16.936 [2024-05-15 01:59:40.764659] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:16.936 [2024-05-15 01:59:40.764698] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:16.936 [2024-05-15 01:59:40.764721] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:16.936 [2024-05-15 01:59:40.764756] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:16.936 01:59:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:16.936 01:59:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:31:16.936 01:59:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:16.936 01:59:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:16.936 01:59:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:16.936 01:59:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:16.936 01:59:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:16.936 01:59:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:16.936 01:59:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:16.936 [2024-05-15 01:59:40.770375] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x143a7c0 was disconnected and freed. delete nvme_qpair. 
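The wait_for_bdev / get_bdev_list helpers traced above boil down to polling the host app's RPC socket once a second until the expected bdev name appears. A minimal stand-alone sketch of that loop (assuming it runs from the SPDK repo root so scripts/rpc.py resolves; the helper name and pipeline mirror the trace):

    # Poll the host app on /tmp/host.sock until bdev "nvme0n1" shows up
    get_bdev_list() {
        scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    while [[ "$(get_bdev_list)" != "nvme0n1" ]]; do
        sleep 1
    done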
00:31:16.936 01:59:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:16.936 01:59:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:31:16.936 01:59:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:31:16.936 01:59:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:31:16.936 01:59:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:31:16.936 01:59:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:16.936 01:59:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:16.936 01:59:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:16.936 01:59:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:16.936 01:59:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:16.936 01:59:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:16.936 01:59:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:17.193 01:59:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:17.193 01:59:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:17.193 01:59:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:18.124 01:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:18.124 01:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:18.124 01:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:18.124 01:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:18.124 01:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:18.124 01:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:18.124 01:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:18.124 01:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:18.124 01:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:18.124 01:59:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:19.056 01:59:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:19.056 01:59:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:19.056 01:59:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:19.056 01:59:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:19.056 01:59:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:19.056 01:59:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
sort 00:31:19.056 01:59:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:19.056 01:59:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:19.056 01:59:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:19.056 01:59:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:20.425 01:59:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:20.425 01:59:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:20.425 01:59:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:20.425 01:59:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:20.425 01:59:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:20.425 01:59:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:20.425 01:59:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:20.425 01:59:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:20.425 01:59:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:20.425 01:59:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:21.357 01:59:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:21.357 01:59:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:21.357 01:59:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:21.357 01:59:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:21.357 01:59:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:21.357 01:59:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:21.357 01:59:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:21.357 01:59:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:21.357 01:59:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:21.357 01:59:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:22.289 01:59:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:22.289 01:59:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:22.289 01:59:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:22.289 01:59:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:22.289 01:59:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:22.289 01:59:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:22.289 01:59:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:22.289 01:59:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:31:22.289 01:59:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:22.289 01:59:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:22.289 [2024-05-15 01:59:46.206620] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:31:22.289 [2024-05-15 01:59:46.206685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:22.289 [2024-05-15 01:59:46.206707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.289 [2024-05-15 01:59:46.206726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:22.289 [2024-05-15 01:59:46.206740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.289 [2024-05-15 01:59:46.206753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:22.289 [2024-05-15 01:59:46.206765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.289 [2024-05-15 01:59:46.206778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:22.289 [2024-05-15 01:59:46.206790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.289 [2024-05-15 01:59:46.206804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:22.289 [2024-05-15 01:59:46.206816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.289 [2024-05-15 01:59:46.206829] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1401850 is same with the state(5) to be set 00:31:22.289 [2024-05-15 01:59:46.216653] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1401850 (9): Bad file descriptor 00:31:22.546 [2024-05-15 01:59:46.226686] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:23.480 01:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:23.480 01:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:23.480 01:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:23.480 01:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:23.480 01:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:23.480 01:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:23.480 01:59:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:23.480 [2024-05-15 01:59:47.233262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:24.411 [2024-05-15 
01:59:48.257290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:24.411 [2024-05-15 01:59:48.257353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1401850 with addr=10.0.0.2, port=4420 00:31:24.411 [2024-05-15 01:59:48.257379] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1401850 is same with the state(5) to be set 00:31:24.411 [2024-05-15 01:59:48.257839] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1401850 (9): Bad file descriptor 00:31:24.411 [2024-05-15 01:59:48.257884] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:24.411 [2024-05-15 01:59:48.257926] bdev_nvme.c:6718:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:31:24.411 [2024-05-15 01:59:48.257964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:24.411 [2024-05-15 01:59:48.257986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:24.411 [2024-05-15 01:59:48.258005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:24.411 [2024-05-15 01:59:48.258018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:24.411 [2024-05-15 01:59:48.258030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:24.411 [2024-05-15 01:59:48.258042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:24.411 [2024-05-15 01:59:48.258055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:24.412 [2024-05-15 01:59:48.258066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:24.412 [2024-05-15 01:59:48.258079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:24.412 [2024-05-15 01:59:48.258091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:24.412 [2024-05-15 01:59:48.258103] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
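The connect() failures above (errno 110 is ETIMEDOUT on Linux) are the intended outcome of the fault injected at steps @75/@76 of the script: the target's address was deleted and its interface downed inside the cvl_0_0_ns_spdk namespace, so each reconnect attempt to 10.0.0.2:4420 times out until the host gives up, marks the controller failed, and removes the discovery entry. The injected fault, exactly as traced:

    # Take the target port away from under the connected host
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down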
00:31:24.412 [2024-05-15 01:59:48.258413] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1400ca0 (9): Bad file descriptor 00:31:24.412 [2024-05-15 01:59:48.259439] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:31:24.412 [2024-05-15 01:59:48.259463] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:31:24.412 01:59:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:24.412 01:59:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:24.412 01:59:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:25.783 01:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:25.783 01:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:25.783 01:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:25.783 01:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:25.783 01:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:25.783 01:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:25.783 01:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:25.783 01:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:25.783 01:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:31:25.783 01:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:25.783 01:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:25.783 01:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:31:25.783 01:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:25.783 01:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:25.783 01:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:25.783 01:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:25.783 01:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:25.783 01:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:25.783 01:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:25.783 01:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:25.783 01:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:25.783 01:59:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:26.347 [2024-05-15 01:59:50.277366] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:26.347 [2024-05-15 01:59:50.277399] 
bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:26.347 [2024-05-15 01:59:50.277424] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:26.605 [2024-05-15 01:59:50.363729] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:31:26.605 01:59:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:26.605 01:59:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:26.605 01:59:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:26.605 01:59:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:26.605 01:59:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:26.605 01:59:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:26.605 01:59:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:26.605 [2024-05-15 01:59:50.419607] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:26.605 [2024-05-15 01:59:50.419652] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:26.605 [2024-05-15 01:59:50.419682] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:26.605 [2024-05-15 01:59:50.419711] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:31:26.605 [2024-05-15 01:59:50.419725] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:26.605 01:59:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:26.605 [2024-05-15 01:59:50.425717] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x143c2d0 was disconnected and freed. delete nvme_qpair. 
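Note: re-adding the address inside the namespace makes the discovery service reachable again, and the poller attaches the subsystem as a fresh controller (nvme1). The steps traced at discovery_remove_ifc.sh@82-@86 condense to the sketch below (reusing the illustrative get_bdev_list from the earlier note):

    # Restore the target-side interface inside the SPDK namespace ...
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    # ... then wait for the re-attached subsystem to surface as nvme1n1.
    while [[ "$(get_bdev_list)" != *nvme1n1* ]]; do
        sleep 1
    done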
00:31:26.605 01:59:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:26.605 01:59:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:27.535 01:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:27.535 01:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:27.535 01:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:27.535 01:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:27.535 01:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:27.535 01:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:27.535 01:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:27.791 01:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:27.792 01:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:31:27.792 01:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:31:27.792 01:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 4186763 00:31:27.792 01:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@947 -- # '[' -z 4186763 ']' 00:31:27.792 01:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # kill -0 4186763 00:31:27.792 01:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # uname 00:31:27.792 01:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:31:27.792 01:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4186763 00:31:27.792 01:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:31:27.792 01:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:31:27.792 01:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4186763' 00:31:27.792 killing process with pid 4186763 00:31:27.792 01:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # kill 4186763 00:31:27.792 01:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # wait 4186763 00:31:28.049 01:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:31:28.049 01:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:28.049 01:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:31:28.049 01:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:28.049 01:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:31:28.049 01:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:28.049 01:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:28.049 rmmod nvme_tcp 00:31:28.049 rmmod nvme_fabrics 00:31:28.049 rmmod nvme_keyring 00:31:28.049 01:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:28.049 01:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:31:28.049 01:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:31:28.049 01:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 4186741 ']' 00:31:28.049 01:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 4186741 00:31:28.049 01:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@947 -- # '[' -z 4186741 ']' 00:31:28.049 01:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # kill -0 4186741 00:31:28.049 01:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # uname 00:31:28.049 01:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:31:28.049 01:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 4186741 00:31:28.049 01:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:31:28.049 01:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:31:28.049 01:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 4186741' 00:31:28.049 killing process with pid 4186741 00:31:28.049 01:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # kill 4186741 00:31:28.049 [2024-05-15 01:59:51.848343] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:28.049 01:59:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # wait 4186741 00:31:28.306 01:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:28.306 01:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:28.306 01:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:28.306 01:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:28.306 01:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:28.306 01:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:28.306 01:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:28.306 01:59:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:30.206 01:59:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:30.206 00:31:30.206 real 0m17.981s 00:31:30.206 user 0m24.667s 00:31:30.206 sys 0m3.272s 00:31:30.206 01:59:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:31:30.206 01:59:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:30.206 ************************************ 00:31:30.206 END TEST nvmf_discovery_remove_ifc 00:31:30.206 ************************************ 00:31:30.463 01:59:54 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:30.463 
01:59:54 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:31:30.463 01:59:54 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:31:30.463 01:59:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:30.463 ************************************ 00:31:30.463 START TEST nvmf_identify_kernel_target 00:31:30.463 ************************************ 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:30.463 * Looking for test storage... 00:31:30.463 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:31:30.463 01:59:54 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:31:30.463 01:59:54 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:32.992 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:32.992 Found 0000:09:00.1 (0x8086 - 0x159b) 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:32.992 Found net devices under 0000:09:00.0: cvl_0_0 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:32.992 Found net devices under 0000:09:00.1: cvl_0_1 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:32.992 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:32.993 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:32.993 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:32.993 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:32.993 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:32.993 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:32.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:32.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:31:32.993 00:31:32.993 --- 10.0.0.2 ping statistics --- 00:31:32.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:32.993 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:31:32.993 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:32.993 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:32.993 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:31:32.993 00:31:32.993 --- 10.0.0.1 ping statistics --- 00:31:32.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:32.993 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:31:32.993 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:32.993 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:31:32.993 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:32.993 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:32.993 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:32.993 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:32.993 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:32.993 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:32.993 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:32.993 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:31:32.993 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:31:32.993 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:31:32.993 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:32.993 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:32.993 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:32.993 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:32.993 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:32.993 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:32.993 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:32.993 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:32.993 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:32.993 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:31:32.993 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:32.993 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:32.993 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:32.993 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:32.993 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:32.993 01:59:56 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:32.993 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:31:32.993 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:31:32.993 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:32.993 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:32.993 01:59:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:34.429 Waiting for block devices as requested 00:31:34.429 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:34.429 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:34.430 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:34.430 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:34.430 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:34.688 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:34.688 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:34.688 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:34.688 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:31:34.946 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:34.946 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:34.946 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:34.946 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:35.203 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:35.203 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:35.203 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:35.203 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:35.464 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:35.464 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:35.464 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:35.464 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:31:35.464 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:35.464 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:31:35.464 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:35.464 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:35.464 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:35.464 No valid GPT data, bailing 00:31:35.464 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:35.464 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:31:35.464 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:31:35.464 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:35.464 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:35.464 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:35.464 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:35.464 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:35.464 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:35.464 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:31:35.464 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:35.464 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:31:35.464 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:35.464 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:31:35.464 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:31:35.464 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:31:35.464 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:35.464 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:31:35.464 00:31:35.464 Discovery Log Number of Records 2, Generation counter 2 00:31:35.464 =====Discovery Log Entry 0====== 00:31:35.464 trtype: tcp 00:31:35.464 adrfam: ipv4 00:31:35.464 subtype: current discovery subsystem 00:31:35.464 treq: not specified, sq flow control disable supported 00:31:35.464 portid: 1 00:31:35.464 trsvcid: 4420 00:31:35.464 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:35.464 traddr: 10.0.0.1 00:31:35.464 eflags: none 00:31:35.464 sectype: none 00:31:35.464 =====Discovery Log Entry 1====== 00:31:35.464 trtype: tcp 00:31:35.464 adrfam: ipv4 00:31:35.464 subtype: nvme subsystem 00:31:35.464 treq: not specified, sq flow control disable supported 00:31:35.464 portid: 1 00:31:35.464 trsvcid: 4420 00:31:35.464 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:35.464 traddr: 10.0.0.1 00:31:35.464 eflags: none 00:31:35.464 sectype: none 00:31:35.464 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:31:35.464 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:31:35.464 EAL: No free 2048 kB hugepages reported on node 1 00:31:35.464 ===================================================== 00:31:35.464 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:35.464 ===================================================== 00:31:35.464 Controller Capabilities/Features 00:31:35.464 ================================ 00:31:35.464 Vendor ID: 0000 00:31:35.464 Subsystem Vendor ID: 0000 00:31:35.464 Serial Number: 05656adf72c51cc09369 00:31:35.464 Model Number: Linux 00:31:35.464 Firmware Version: 6.7.0-68 00:31:35.464 Recommended Arb Burst: 0 00:31:35.464 IEEE OUI Identifier: 00 00 00 00:31:35.464 Multi-path I/O 00:31:35.464 May have multiple subsystem ports: No 00:31:35.464 May have multiple 
controllers: No 00:31:35.464 Associated with SR-IOV VF: No 00:31:35.464 Max Data Transfer Size: Unlimited 00:31:35.464 Max Number of Namespaces: 0 00:31:35.464 Max Number of I/O Queues: 1024 00:31:35.464 NVMe Specification Version (VS): 1.3 00:31:35.464 NVMe Specification Version (Identify): 1.3 00:31:35.464 Maximum Queue Entries: 1024 00:31:35.464 Contiguous Queues Required: No 00:31:35.464 Arbitration Mechanisms Supported 00:31:35.464 Weighted Round Robin: Not Supported 00:31:35.464 Vendor Specific: Not Supported 00:31:35.464 Reset Timeout: 7500 ms 00:31:35.464 Doorbell Stride: 4 bytes 00:31:35.464 NVM Subsystem Reset: Not Supported 00:31:35.464 Command Sets Supported 00:31:35.464 NVM Command Set: Supported 00:31:35.464 Boot Partition: Not Supported 00:31:35.464 Memory Page Size Minimum: 4096 bytes 00:31:35.464 Memory Page Size Maximum: 4096 bytes 00:31:35.464 Persistent Memory Region: Not Supported 00:31:35.464 Optional Asynchronous Events Supported 00:31:35.464 Namespace Attribute Notices: Not Supported 00:31:35.464 Firmware Activation Notices: Not Supported 00:31:35.464 ANA Change Notices: Not Supported 00:31:35.464 PLE Aggregate Log Change Notices: Not Supported 00:31:35.464 LBA Status Info Alert Notices: Not Supported 00:31:35.464 EGE Aggregate Log Change Notices: Not Supported 00:31:35.464 Normal NVM Subsystem Shutdown event: Not Supported 00:31:35.464 Zone Descriptor Change Notices: Not Supported 00:31:35.464 Discovery Log Change Notices: Supported 00:31:35.464 Controller Attributes 00:31:35.464 128-bit Host Identifier: Not Supported 00:31:35.464 Non-Operational Permissive Mode: Not Supported 00:31:35.464 NVM Sets: Not Supported 00:31:35.464 Read Recovery Levels: Not Supported 00:31:35.464 Endurance Groups: Not Supported 00:31:35.464 Predictable Latency Mode: Not Supported 00:31:35.464 Traffic Based Keep ALive: Not Supported 00:31:35.464 Namespace Granularity: Not Supported 00:31:35.464 SQ Associations: Not Supported 00:31:35.464 UUID List: Not Supported 00:31:35.464 Multi-Domain Subsystem: Not Supported 00:31:35.464 Fixed Capacity Management: Not Supported 00:31:35.464 Variable Capacity Management: Not Supported 00:31:35.464 Delete Endurance Group: Not Supported 00:31:35.464 Delete NVM Set: Not Supported 00:31:35.464 Extended LBA Formats Supported: Not Supported 00:31:35.464 Flexible Data Placement Supported: Not Supported 00:31:35.464 00:31:35.464 Controller Memory Buffer Support 00:31:35.464 ================================ 00:31:35.464 Supported: No 00:31:35.464 00:31:35.465 Persistent Memory Region Support 00:31:35.465 ================================ 00:31:35.465 Supported: No 00:31:35.465 00:31:35.465 Admin Command Set Attributes 00:31:35.465 ============================ 00:31:35.465 Security Send/Receive: Not Supported 00:31:35.465 Format NVM: Not Supported 00:31:35.465 Firmware Activate/Download: Not Supported 00:31:35.465 Namespace Management: Not Supported 00:31:35.465 Device Self-Test: Not Supported 00:31:35.465 Directives: Not Supported 00:31:35.465 NVMe-MI: Not Supported 00:31:35.465 Virtualization Management: Not Supported 00:31:35.465 Doorbell Buffer Config: Not Supported 00:31:35.465 Get LBA Status Capability: Not Supported 00:31:35.465 Command & Feature Lockdown Capability: Not Supported 00:31:35.465 Abort Command Limit: 1 00:31:35.465 Async Event Request Limit: 1 00:31:35.465 Number of Firmware Slots: N/A 00:31:35.465 Firmware Slot 1 Read-Only: N/A 00:31:35.465 Firmware Activation Without Reset: N/A 00:31:35.465 Multiple Update Detection Support: N/A 
00:31:35.465 Firmware Update Granularity: No Information Provided 00:31:35.465 Per-Namespace SMART Log: No 00:31:35.465 Asymmetric Namespace Access Log Page: Not Supported 00:31:35.465 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:31:35.465 Command Effects Log Page: Not Supported 00:31:35.465 Get Log Page Extended Data: Supported 00:31:35.465 Telemetry Log Pages: Not Supported 00:31:35.465 Persistent Event Log Pages: Not Supported 00:31:35.465 Supported Log Pages Log Page: May Support 00:31:35.465 Commands Supported & Effects Log Page: Not Supported 00:31:35.465 Feature Identifiers & Effects Log Page:May Support 00:31:35.465 NVMe-MI Commands & Effects Log Page: May Support 00:31:35.465 Data Area 4 for Telemetry Log: Not Supported 00:31:35.465 Error Log Page Entries Supported: 1 00:31:35.465 Keep Alive: Not Supported 00:31:35.465 00:31:35.465 NVM Command Set Attributes 00:31:35.465 ========================== 00:31:35.465 Submission Queue Entry Size 00:31:35.465 Max: 1 00:31:35.465 Min: 1 00:31:35.465 Completion Queue Entry Size 00:31:35.465 Max: 1 00:31:35.465 Min: 1 00:31:35.465 Number of Namespaces: 0 00:31:35.465 Compare Command: Not Supported 00:31:35.465 Write Uncorrectable Command: Not Supported 00:31:35.465 Dataset Management Command: Not Supported 00:31:35.465 Write Zeroes Command: Not Supported 00:31:35.465 Set Features Save Field: Not Supported 00:31:35.465 Reservations: Not Supported 00:31:35.465 Timestamp: Not Supported 00:31:35.465 Copy: Not Supported 00:31:35.465 Volatile Write Cache: Not Present 00:31:35.465 Atomic Write Unit (Normal): 1 00:31:35.465 Atomic Write Unit (PFail): 1 00:31:35.465 Atomic Compare & Write Unit: 1 00:31:35.465 Fused Compare & Write: Not Supported 00:31:35.465 Scatter-Gather List 00:31:35.465 SGL Command Set: Supported 00:31:35.465 SGL Keyed: Not Supported 00:31:35.465 SGL Bit Bucket Descriptor: Not Supported 00:31:35.465 SGL Metadata Pointer: Not Supported 00:31:35.465 Oversized SGL: Not Supported 00:31:35.465 SGL Metadata Address: Not Supported 00:31:35.465 SGL Offset: Supported 00:31:35.465 Transport SGL Data Block: Not Supported 00:31:35.465 Replay Protected Memory Block: Not Supported 00:31:35.465 00:31:35.465 Firmware Slot Information 00:31:35.465 ========================= 00:31:35.465 Active slot: 0 00:31:35.465 00:31:35.465 00:31:35.465 Error Log 00:31:35.465 ========= 00:31:35.465 00:31:35.465 Active Namespaces 00:31:35.465 ================= 00:31:35.465 Discovery Log Page 00:31:35.465 ================== 00:31:35.465 Generation Counter: 2 00:31:35.465 Number of Records: 2 00:31:35.465 Record Format: 0 00:31:35.465 00:31:35.465 Discovery Log Entry 0 00:31:35.465 ---------------------- 00:31:35.465 Transport Type: 3 (TCP) 00:31:35.465 Address Family: 1 (IPv4) 00:31:35.465 Subsystem Type: 3 (Current Discovery Subsystem) 00:31:35.465 Entry Flags: 00:31:35.465 Duplicate Returned Information: 0 00:31:35.465 Explicit Persistent Connection Support for Discovery: 0 00:31:35.465 Transport Requirements: 00:31:35.465 Secure Channel: Not Specified 00:31:35.465 Port ID: 1 (0x0001) 00:31:35.465 Controller ID: 65535 (0xffff) 00:31:35.465 Admin Max SQ Size: 32 00:31:35.465 Transport Service Identifier: 4420 00:31:35.465 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:31:35.465 Transport Address: 10.0.0.1 00:31:35.465 Discovery Log Entry 1 00:31:35.465 ---------------------- 00:31:35.465 Transport Type: 3 (TCP) 00:31:35.465 Address Family: 1 (IPv4) 00:31:35.465 Subsystem Type: 2 (NVM Subsystem) 00:31:35.465 Entry Flags: 
00:31:35.465 Duplicate Returned Information: 0 00:31:35.465 Explicit Persistent Connection Support for Discovery: 0 00:31:35.465 Transport Requirements: 00:31:35.465 Secure Channel: Not Specified 00:31:35.465 Port ID: 1 (0x0001) 00:31:35.465 Controller ID: 65535 (0xffff) 00:31:35.465 Admin Max SQ Size: 32 00:31:35.465 Transport Service Identifier: 4420 00:31:35.465 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:31:35.465 Transport Address: 10.0.0.1 00:31:35.465 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:35.465 EAL: No free 2048 kB hugepages reported on node 1 00:31:35.465 get_feature(0x01) failed 00:31:35.465 get_feature(0x02) failed 00:31:35.465 get_feature(0x04) failed 00:31:35.465 ===================================================== 00:31:35.465 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:35.465 ===================================================== 00:31:35.465 Controller Capabilities/Features 00:31:35.465 ================================ 00:31:35.465 Vendor ID: 0000 00:31:35.465 Subsystem Vendor ID: 0000 00:31:35.465 Serial Number: ff523f608c3336f13ffc 00:31:35.465 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:31:35.465 Firmware Version: 6.7.0-68 00:31:35.465 Recommended Arb Burst: 6 00:31:35.465 IEEE OUI Identifier: 00 00 00 00:31:35.465 Multi-path I/O 00:31:35.465 May have multiple subsystem ports: Yes 00:31:35.465 May have multiple controllers: Yes 00:31:35.465 Associated with SR-IOV VF: No 00:31:35.465 Max Data Transfer Size: Unlimited 00:31:35.465 Max Number of Namespaces: 1024 00:31:35.465 Max Number of I/O Queues: 128 00:31:35.465 NVMe Specification Version (VS): 1.3 00:31:35.465 NVMe Specification Version (Identify): 1.3 00:31:35.465 Maximum Queue Entries: 1024 00:31:35.465 Contiguous Queues Required: No 00:31:35.465 Arbitration Mechanisms Supported 00:31:35.465 Weighted Round Robin: Not Supported 00:31:35.465 Vendor Specific: Not Supported 00:31:35.465 Reset Timeout: 7500 ms 00:31:35.465 Doorbell Stride: 4 bytes 00:31:35.465 NVM Subsystem Reset: Not Supported 00:31:35.465 Command Sets Supported 00:31:35.465 NVM Command Set: Supported 00:31:35.465 Boot Partition: Not Supported 00:31:35.465 Memory Page Size Minimum: 4096 bytes 00:31:35.465 Memory Page Size Maximum: 4096 bytes 00:31:35.465 Persistent Memory Region: Not Supported 00:31:35.465 Optional Asynchronous Events Supported 00:31:35.465 Namespace Attribute Notices: Supported 00:31:35.465 Firmware Activation Notices: Not Supported 00:31:35.465 ANA Change Notices: Supported 00:31:35.465 PLE Aggregate Log Change Notices: Not Supported 00:31:35.465 LBA Status Info Alert Notices: Not Supported 00:31:35.465 EGE Aggregate Log Change Notices: Not Supported 00:31:35.465 Normal NVM Subsystem Shutdown event: Not Supported 00:31:35.466 Zone Descriptor Change Notices: Not Supported 00:31:35.466 Discovery Log Change Notices: Not Supported 00:31:35.466 Controller Attributes 00:31:35.466 128-bit Host Identifier: Supported 00:31:35.466 Non-Operational Permissive Mode: Not Supported 00:31:35.466 NVM Sets: Not Supported 00:31:35.466 Read Recovery Levels: Not Supported 00:31:35.466 Endurance Groups: Not Supported 00:31:35.466 Predictable Latency Mode: Not Supported 00:31:35.466 Traffic Based Keep ALive: Supported 00:31:35.466 Namespace Granularity: Not Supported 
00:31:35.466 SQ Associations: Not Supported 00:31:35.466 UUID List: Not Supported 00:31:35.466 Multi-Domain Subsystem: Not Supported 00:31:35.466 Fixed Capacity Management: Not Supported 00:31:35.466 Variable Capacity Management: Not Supported 00:31:35.466 Delete Endurance Group: Not Supported 00:31:35.466 Delete NVM Set: Not Supported 00:31:35.466 Extended LBA Formats Supported: Not Supported 00:31:35.466 Flexible Data Placement Supported: Not Supported 00:31:35.466 00:31:35.466 Controller Memory Buffer Support 00:31:35.466 ================================ 00:31:35.466 Supported: No 00:31:35.466 00:31:35.466 Persistent Memory Region Support 00:31:35.466 ================================ 00:31:35.466 Supported: No 00:31:35.466 00:31:35.466 Admin Command Set Attributes 00:31:35.466 ============================ 00:31:35.466 Security Send/Receive: Not Supported 00:31:35.466 Format NVM: Not Supported 00:31:35.466 Firmware Activate/Download: Not Supported 00:31:35.466 Namespace Management: Not Supported 00:31:35.466 Device Self-Test: Not Supported 00:31:35.466 Directives: Not Supported 00:31:35.466 NVMe-MI: Not Supported 00:31:35.466 Virtualization Management: Not Supported 00:31:35.466 Doorbell Buffer Config: Not Supported 00:31:35.466 Get LBA Status Capability: Not Supported 00:31:35.466 Command & Feature Lockdown Capability: Not Supported 00:31:35.466 Abort Command Limit: 4 00:31:35.466 Async Event Request Limit: 4 00:31:35.466 Number of Firmware Slots: N/A 00:31:35.466 Firmware Slot 1 Read-Only: N/A 00:31:35.466 Firmware Activation Without Reset: N/A 00:31:35.466 Multiple Update Detection Support: N/A 00:31:35.466 Firmware Update Granularity: No Information Provided 00:31:35.466 Per-Namespace SMART Log: Yes 00:31:35.466 Asymmetric Namespace Access Log Page: Supported 00:31:35.466 ANA Transition Time : 10 sec 00:31:35.466 00:31:35.466 Asymmetric Namespace Access Capabilities 00:31:35.466 ANA Optimized State : Supported 00:31:35.466 ANA Non-Optimized State : Supported 00:31:35.466 ANA Inaccessible State : Supported 00:31:35.466 ANA Persistent Loss State : Supported 00:31:35.466 ANA Change State : Supported 00:31:35.466 ANAGRPID is not changed : No 00:31:35.466 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:31:35.466 00:31:35.466 ANA Group Identifier Maximum : 128 00:31:35.466 Number of ANA Group Identifiers : 128 00:31:35.466 Max Number of Allowed Namespaces : 1024 00:31:35.466 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:31:35.466 Command Effects Log Page: Supported 00:31:35.466 Get Log Page Extended Data: Supported 00:31:35.466 Telemetry Log Pages: Not Supported 00:31:35.466 Persistent Event Log Pages: Not Supported 00:31:35.466 Supported Log Pages Log Page: May Support 00:31:35.466 Commands Supported & Effects Log Page: Not Supported 00:31:35.466 Feature Identifiers & Effects Log Page:May Support 00:31:35.466 NVMe-MI Commands & Effects Log Page: May Support 00:31:35.466 Data Area 4 for Telemetry Log: Not Supported 00:31:35.466 Error Log Page Entries Supported: 128 00:31:35.466 Keep Alive: Supported 00:31:35.466 Keep Alive Granularity: 1000 ms 00:31:35.466 00:31:35.466 NVM Command Set Attributes 00:31:35.466 ========================== 00:31:35.466 Submission Queue Entry Size 00:31:35.466 Max: 64 00:31:35.466 Min: 64 00:31:35.466 Completion Queue Entry Size 00:31:35.466 Max: 16 00:31:35.466 Min: 16 00:31:35.466 Number of Namespaces: 1024 00:31:35.466 Compare Command: Not Supported 00:31:35.466 Write Uncorrectable Command: Not Supported 00:31:35.466 Dataset Management Command: Supported 
00:31:35.466 Write Zeroes Command: Supported 00:31:35.466 Set Features Save Field: Not Supported 00:31:35.466 Reservations: Not Supported 00:31:35.466 Timestamp: Not Supported 00:31:35.466 Copy: Not Supported 00:31:35.466 Volatile Write Cache: Present 00:31:35.466 Atomic Write Unit (Normal): 1 00:31:35.466 Atomic Write Unit (PFail): 1 00:31:35.466 Atomic Compare & Write Unit: 1 00:31:35.466 Fused Compare & Write: Not Supported 00:31:35.466 Scatter-Gather List 00:31:35.466 SGL Command Set: Supported 00:31:35.466 SGL Keyed: Not Supported 00:31:35.466 SGL Bit Bucket Descriptor: Not Supported 00:31:35.466 SGL Metadata Pointer: Not Supported 00:31:35.466 Oversized SGL: Not Supported 00:31:35.466 SGL Metadata Address: Not Supported 00:31:35.466 SGL Offset: Supported 00:31:35.466 Transport SGL Data Block: Not Supported 00:31:35.466 Replay Protected Memory Block: Not Supported 00:31:35.466 00:31:35.466 Firmware Slot Information 00:31:35.466 ========================= 00:31:35.466 Active slot: 0 00:31:35.466 00:31:35.466 Asymmetric Namespace Access 00:31:35.466 =========================== 00:31:35.466 Change Count : 0 00:31:35.466 Number of ANA Group Descriptors : 1 00:31:35.466 ANA Group Descriptor : 0 00:31:35.466 ANA Group ID : 1 00:31:35.466 Number of NSID Values : 1 00:31:35.466 Change Count : 0 00:31:35.466 ANA State : 1 00:31:35.466 Namespace Identifier : 1 00:31:35.466 00:31:35.466 Commands Supported and Effects 00:31:35.466 ============================== 00:31:35.466 Admin Commands 00:31:35.466 -------------- 00:31:35.466 Get Log Page (02h): Supported 00:31:35.466 Identify (06h): Supported 00:31:35.466 Abort (08h): Supported 00:31:35.466 Set Features (09h): Supported 00:31:35.466 Get Features (0Ah): Supported 00:31:35.466 Asynchronous Event Request (0Ch): Supported 00:31:35.466 Keep Alive (18h): Supported 00:31:35.466 I/O Commands 00:31:35.466 ------------ 00:31:35.466 Flush (00h): Supported 00:31:35.466 Write (01h): Supported LBA-Change 00:31:35.466 Read (02h): Supported 00:31:35.466 Write Zeroes (08h): Supported LBA-Change 00:31:35.466 Dataset Management (09h): Supported 00:31:35.466 00:31:35.466 Error Log 00:31:35.466 ========= 00:31:35.466 Entry: 0 00:31:35.466 Error Count: 0x3 00:31:35.466 Submission Queue Id: 0x0 00:31:35.466 Command Id: 0x5 00:31:35.467 Phase Bit: 0 00:31:35.467 Status Code: 0x2 00:31:35.467 Status Code Type: 0x0 00:31:35.467 Do Not Retry: 1 00:31:35.467 Error Location: 0x28 00:31:35.467 LBA: 0x0 00:31:35.467 Namespace: 0x0 00:31:35.467 Vendor Log Page: 0x0 00:31:35.467 ----------- 00:31:35.467 Entry: 1 00:31:35.467 Error Count: 0x2 00:31:35.467 Submission Queue Id: 0x0 00:31:35.467 Command Id: 0x5 00:31:35.467 Phase Bit: 0 00:31:35.467 Status Code: 0x2 00:31:35.467 Status Code Type: 0x0 00:31:35.467 Do Not Retry: 1 00:31:35.467 Error Location: 0x28 00:31:35.467 LBA: 0x0 00:31:35.467 Namespace: 0x0 00:31:35.467 Vendor Log Page: 0x0 00:31:35.467 ----------- 00:31:35.467 Entry: 2 00:31:35.467 Error Count: 0x1 00:31:35.467 Submission Queue Id: 0x0 00:31:35.467 Command Id: 0x4 00:31:35.467 Phase Bit: 0 00:31:35.467 Status Code: 0x2 00:31:35.467 Status Code Type: 0x0 00:31:35.467 Do Not Retry: 1 00:31:35.467 Error Location: 0x28 00:31:35.467 LBA: 0x0 00:31:35.467 Namespace: 0x0 00:31:35.467 Vendor Log Page: 0x0 00:31:35.467 00:31:35.467 Number of Queues 00:31:35.467 ================ 00:31:35.467 Number of I/O Submission Queues: 128 00:31:35.467 Number of I/O Completion Queues: 128 00:31:35.467 00:31:35.467 ZNS Specific Controller Data 00:31:35.467 
============================ 00:31:35.467 Zone Append Size Limit: 0 00:31:35.467 00:31:35.467 00:31:35.467 Active Namespaces 00:31:35.467 ================= 00:31:35.467 get_feature(0x05) failed 00:31:35.467 Namespace ID:1 00:31:35.467 Command Set Identifier: NVM (00h) 00:31:35.467 Deallocate: Supported 00:31:35.467 Deallocated/Unwritten Error: Not Supported 00:31:35.467 Deallocated Read Value: Unknown 00:31:35.467 Deallocate in Write Zeroes: Not Supported 00:31:35.467 Deallocated Guard Field: 0xFFFF 00:31:35.467 Flush: Supported 00:31:35.467 Reservation: Not Supported 00:31:35.467 Namespace Sharing Capabilities: Multiple Controllers 00:31:35.467 Size (in LBAs): 1953525168 (931GiB) 00:31:35.467 Capacity (in LBAs): 1953525168 (931GiB) 00:31:35.467 Utilization (in LBAs): 1953525168 (931GiB) 00:31:35.467 UUID: 324322cb-cfe1-46c1-871c-3df9fff060ba 00:31:35.467 Thin Provisioning: Not Supported 00:31:35.467 Per-NS Atomic Units: Yes 00:31:35.467 Atomic Boundary Size (Normal): 0 00:31:35.467 Atomic Boundary Size (PFail): 0 00:31:35.467 Atomic Boundary Offset: 0 00:31:35.467 NGUID/EUI64 Never Reused: No 00:31:35.467 ANA group ID: 1 00:31:35.467 Namespace Write Protected: No 00:31:35.467 Number of LBA Formats: 1 00:31:35.467 Current LBA Format: LBA Format #00 00:31:35.467 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:35.467 00:31:35.467 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:31:35.467 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:35.467 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:31:35.467 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:35.467 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:31:35.467 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:35.467 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:35.467 rmmod nvme_tcp 00:31:35.467 rmmod nvme_fabrics 00:31:35.467 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:35.467 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:31:35.467 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:31:35.467 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:31:35.467 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:35.467 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:35.467 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:35.467 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:35.467 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:35.467 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:35.467 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:35.467 01:59:59 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:37.994 02:00:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:37.994 
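The controller dump above is the end of the identify pass against the kernel nvmet target. For reference, the same data can be pulled with plain nvme-cli while the fabric connection is still up; a minimal sketch, assuming nvme-cli is installed and the attached controller shows up as /dev/nvme0:

nvme id-ctrl /dev/nvme0 -H      # identify controller, human-readable
nvme id-ns /dev/nvme0 -n 1      # namespace 1: LBA formats, size/capacity/utilization
nvme error-log /dev/nvme0 -e 3  # the three error-log entries printed above

The three error entries (counts 0x1-0x3, status 0x2 = Invalid Field in Command) are consistent with the get_feature/get_log probes the identify tool sends and the kernel target rejects, matching the get_feature(0x05) failed line in the namespace section.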
02:00:01 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:31:37.994 02:00:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:37.994 02:00:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:31:37.994 02:00:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:37.994 02:00:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:37.994 02:00:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:37.994 02:00:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:37.994 02:00:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:31:37.994 02:00:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:31:37.994 02:00:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:38.924 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:38.924 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:38.924 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:38.924 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:38.924 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:38.924 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:38.924 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:38.924 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:38.924 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:38.924 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:38.924 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:39.181 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:39.181 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:39.181 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:39.181 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:39.181 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:40.111 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:31:40.111 00:31:40.111 real 0m9.754s 00:31:40.111 user 0m2.165s 00:31:40.111 sys 0m3.822s 00:31:40.111 02:00:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # xtrace_disable 00:31:40.111 02:00:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:40.111 ************************************ 00:31:40.111 END TEST nvmf_identify_kernel_target 00:31:40.111 ************************************ 00:31:40.111 02:00:03 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:31:40.111 02:00:03 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:31:40.111 02:00:03 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:31:40.111 02:00:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:40.111 ************************************ 00:31:40.111 START TEST nvmf_auth_host 00:31:40.111 ************************************ 00:31:40.111 02:00:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 
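clean_kernel_target, traced at the top of this block, has to unwind the configfs tree in the only order the kernel accepts: drop the port->subsystem link before removing directories, and remove the namespace before its subsystem. Condensed into a standalone sketch (paths from the trace; xtrace hides redirection targets, so the enable write is an assumption):

sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
echo 0 > "$sub/namespaces/1/enable"    # assumed target of the bare 'echo 0' above
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
rmdir "$sub/namespaces/1"
rmdir /sys/kernel/config/nvmet/ports/1
rmdir "$sub"
modprobe -r nvmet_tcp nvmet            # unload once nothing references the modules

setup.sh then rebinds the ioatdma channels and the NVMe drive back to vfio-pci for the SPDK side, which is the block of 0000:... lines that follows, before run_test moves on to nvmf_auth_host.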
00:31:40.369 * Looking for test storage... 00:31:40.369 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:40.369 02:00:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:40.369 02:00:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:31:40.369 02:00:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:40.369 02:00:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:40.369 02:00:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:40.369 02:00:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:40.369 02:00:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:40.369 02:00:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:40.369 02:00:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:40.369 02:00:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:40.369 02:00:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:40.369 02:00:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:40.369 02:00:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:40.369 02:00:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:40.369 02:00:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:40.369 02:00:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:40.369 02:00:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:40.369 02:00:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:40.369 02:00:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:40.369 02:00:04 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:40.369 02:00:04 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:40.369 02:00:04 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:40.369 02:00:04 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.369 02:00:04 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.369 02:00:04 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.369 02:00:04 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:31:40.370 02:00:04 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.370 02:00:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:31:40.370 02:00:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:40.370 02:00:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:40.370 02:00:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:40.370 02:00:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:40.370 02:00:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:40.370 02:00:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:40.370 02:00:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:40.370 02:00:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:40.370 02:00:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:31:40.370 02:00:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:31:40.370 02:00:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:31:40.370 02:00:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:31:40.370 02:00:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:40.370 02:00:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:40.370 02:00:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:31:40.370 02:00:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:31:40.370 02:00:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:31:40.370 02:00:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:40.370 02:00:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:40.370 02:00:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:40.370 02:00:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:40.370 02:00:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:40.370 02:00:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:40.370 02:00:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:40.370 02:00:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:40.370 02:00:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:40.370 02:00:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:40.370 02:00:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:31:40.370 02:00:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.895 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:42.895 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:31:42.895 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:42.895 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:42.895 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:42.896 
02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:42.896 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:42.896 Found 0000:09:00.1 (0x8086 - 0x159b) 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:42.896 Found net devices under 0000:09:00.0: 
cvl_0_0 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:42.896 Found net devices under 0000:09:00.1: cvl_0_1 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:42.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:42.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:31:42.896 00:31:42.896 --- 10.0.0.2 ping statistics --- 00:31:42.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:42.896 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:42.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:42.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:31:42.896 00:31:42.896 --- 10.0.0.1 ping statistics --- 00:31:42.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:42.896 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@721 -- # xtrace_disable 00:31:42.896 02:00:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.897 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=950 00:31:42.897 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:31:42.897 02:00:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 950 00:31:42.897 02:00:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@828 -- # '[' -z 950 ']' 00:31:42.897 02:00:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:42.897 02:00:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local max_retries=100 00:31:42.897 02:00:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
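nvmf_tcp_init, traced above, splits the two ports of the E810 NIC between a target network namespace and the root (initiator) namespace, then proves reachability with a ping in each direction. Condensed, with interfaces and addresses as in the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator port stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                               # initiator -> target sanity check

Both pings come back in well under a millisecond, after which nvmfappstart launches the target inside the namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth, pid 950 here) and waits for its RPC socket.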
00:31:42.897 02:00:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # xtrace_disable 00:31:42.897 02:00:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@861 -- # return 0 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@727 -- # xtrace_disable 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=951ce15801e82e67ec464e14d153cd45 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Tmc 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 951ce15801e82e67ec464e14d153cd45 0 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 951ce15801e82e67ec464e14d153cd45 0 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=951ce15801e82e67ec464e14d153cd45 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Tmc 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Tmc 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Tmc 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:31:43.461 
02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e6dd2d6d64fd803689c6819e17b80a28978f577e37a05914da761bcde05bf635 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.RHd 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e6dd2d6d64fd803689c6819e17b80a28978f577e37a05914da761bcde05bf635 3 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e6dd2d6d64fd803689c6819e17b80a28978f577e37a05914da761bcde05bf635 3 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e6dd2d6d64fd803689c6819e17b80a28978f577e37a05914da761bcde05bf635 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.RHd 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.RHd 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.RHd 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1161d50662166cea11607add1b47d8244a2d541269bc5378 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.64Y 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1161d50662166cea11607add1b47d8244a2d541269bc5378 0 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1161d50662166cea11607add1b47d8244a2d541269bc5378 0 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1161d50662166cea11607add1b47d8244a2d541269bc5378 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.64Y 00:31:43.461 02:00:07 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.64Y 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.64Y 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=643c88dc52723af02afff88ee08347f8f7374bde30ab39eb 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.3nN 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 643c88dc52723af02afff88ee08347f8f7374bde30ab39eb 2 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 643c88dc52723af02afff88ee08347f8f7374bde30ab39eb 2 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=643c88dc52723af02afff88ee08347f8f7374bde30ab39eb 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.3nN 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.3nN 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.3nN 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=57dda0057fc96b1b633235701a2b884f 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.F6B 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 57dda0057fc96b1b633235701a2b884f 1 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 57dda0057fc96b1b633235701a2b884f 1 
00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=57dda0057fc96b1b633235701a2b884f 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:43.461 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.F6B 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.F6B 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.F6B 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=dbdd0f6578c774e62ce84696c4310b48 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.f0k 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key dbdd0f6578c774e62ce84696c4310b48 1 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 dbdd0f6578c774e62ce84696c4310b48 1 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=dbdd0f6578c774e62ce84696c4310b48 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.f0k 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.f0k 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.f0k 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=c971dd993d5c085d207a8818bd4441e642ab579f8a710d05 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.YKu 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c971dd993d5c085d207a8818bd4441e642ab579f8a710d05 2 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c971dd993d5c085d207a8818bd4441e642ab579f8a710d05 2 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c971dd993d5c085d207a8818bd4441e642ab579f8a710d05 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.YKu 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.YKu 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.YKu 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b5d062e385b287319474df89712c8ebc 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.1fz 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b5d062e385b287319474df89712c8ebc 0 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b5d062e385b287319474df89712c8ebc 0 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b5d062e385b287319474df89712c8ebc 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.1fz 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.1fz 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.1fz 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=017c286a974ac41bcf73e624444adde953768c9e00bf7c72e7a65474b8fec6a1 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.iln 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 017c286a974ac41bcf73e624444adde953768c9e00bf7c72e7a65474b8fec6a1 3 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 017c286a974ac41bcf73e624444adde953768c9e00bf7c72e7a65474b8fec6a1 3 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=017c286a974ac41bcf73e624444adde953768c9e00bf7c72e7a65474b8fec6a1 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.iln 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.iln 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.iln 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 950 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@828 -- # '[' -z 950 ']' 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local max_retries=100 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:43.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
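gen_dhchap_key, invoked ten times above, wraps fresh /dev/urandom bytes in the DHHC-1 secret representation; the digest code after the prefix follows the table in the trace (0 = null, 1 = sha256, 2 = sha384, 3 = sha512). xtrace does not expand the inline python step, but the DHHC-1:00 strings later in the log decode to the ASCII hex key plus a four-byte trailer, consistent with a CRC-32 suffix. A minimal sketch under that assumption:

key=$(xxd -p -c0 -l 32 /dev/urandom)   # 64 hex chars = 32 random bytes
python3 - "$key" <<'EOF'
import base64, binascii, sys
raw = sys.argv[1].encode()             # the hex string itself is the secret
# assumed layout: base64(secret || crc32(secret)), CRC stored little-endian
crc = binascii.crc32(raw).to_bytes(4, "little")
print("DHHC-1:03:%s:" % base64.b64encode(raw + crc).decode())   # 03 = sha512
EOF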
00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # xtrace_disable 00:31:43.719 02:00:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.976 02:00:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:31:43.976 02:00:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@861 -- # return 0 00:31:43.976 02:00:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:43.976 02:00:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Tmc 00:31:43.976 02:00:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:43.976 02:00:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.976 02:00:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:43.976 02:00:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.RHd ]] 00:31:43.976 02:00:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.RHd 00:31:43.976 02:00:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:43.976 02:00:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.976 02:00:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:43.976 02:00:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:43.976 02:00:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.64Y 00:31:43.976 02:00:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:43.976 02:00:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.976 02:00:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:43.976 02:00:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.3nN ]] 00:31:43.976 02:00:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3nN 00:31:43.976 02:00:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:43.976 02:00:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.976 02:00:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:43.976 02:00:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:43.976 02:00:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.F6B 00:31:43.976 02:00:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:43.976 02:00:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.976 02:00:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:43.976 02:00:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.f0k ]] 00:31:43.976 02:00:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.f0k 00:31:43.977 02:00:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:43.977 02:00:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.977 02:00:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:43.977 02:00:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:31:43.977 02:00:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.YKu 00:31:43.977 02:00:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:43.977 02:00:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.977 02:00:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:43.977 02:00:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.1fz ]] 00:31:43.977 02:00:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.1fz 00:31:43.977 02:00:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:43.977 02:00:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.977 02:00:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:43.977 02:00:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:43.977 02:00:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.iln 00:31:43.977 02:00:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:43.977 02:00:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.234 02:00:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:44.234 02:00:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:31:44.234 02:00:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:31:44.234 02:00:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:31:44.234 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:44.234 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:44.234 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:44.234 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:44.234 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:44.234 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:44.234 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:44.234 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:44.234 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:44.234 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:44.234 02:00:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:31:44.234 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:31:44.234 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:44.234 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:44.234 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:44.234 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:44.234 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
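The block above feeds every generated secret to the freshly started target; the rpc_cmd calls reduce to this loop (the rpc.py path is shortened here for readability):

for i in "${!keys[@]}"; do
    rpc.py keyring_file_add_key "key$i" "${keys[$i]}"
    [[ -n ${ckeys[$i]} ]] && rpc.py keyring_file_add_key "ckey$i" "${ckeys[$i]}"
done

ckeys[4] was left empty, so the guard skips it, presumably to exercise the unidirectional case where no controller secret is configured.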
00:31:44.234 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]]
00:31:44.234 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet
00:31:44.234 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]]
00:31:44.234 02:00:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:31:45.604 Waiting for block devices as requested
00:31:45.604 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma
00:31:45.604 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma
00:31:45.604 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma
00:31:45.604 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma
00:31:45.604 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma
00:31:45.604 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma
00:31:45.861 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma
00:31:45.861 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma
00:31:45.861 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme
00:31:45.861 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma
00:31:46.119 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma
00:31:46.119 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma
00:31:46.119 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma
00:31:46.119 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma
00:31:46.375 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma
00:31:46.375 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma
00:31:46.375 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma
00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme*
00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]]
00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1
00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1659 -- # local device=nvme0n1
00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # [[ none != none ]]
00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1
00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
No valid GPT data, bailing
00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt=
00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1
00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1
00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]]
00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1
00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1
00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1
00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1
00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp
00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420
00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4
00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420
00:31:46.939
00:31:46.939 Discovery Log Number of Records 2, Generation counter 2
00:31:46.939 =====Discovery Log Entry 0======
00:31:46.939 trtype: tcp
00:31:46.939 adrfam: ipv4
00:31:46.939 subtype: current discovery subsystem
00:31:46.939 treq: not specified, sq flow control disable supported
00:31:46.939 portid: 1
00:31:46.939 trsvcid: 4420
00:31:46.939 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:31:46.939 traddr: 10.0.0.1
00:31:46.939 eflags: none
00:31:46.939 sectype: none
00:31:46.939 =====Discovery Log Entry 1======
00:31:46.939 trtype: tcp
00:31:46.939 adrfam: ipv4
00:31:46.939 subtype: nvme subsystem
00:31:46.939 treq: not specified, sq flow control disable supported
00:31:46.939 portid: 1
00:31:46.939 trsvcid: 4420
00:31:46.939 subnqn: nqn.2024-02.io.spdk:cnode0
00:31:46.939 traddr: 10.0.0.1
00:31:46.939 eflags: none
00:31:46.939 sectype: none
00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE2MWQ1MDY2MjE2NmNlYTExNjA3YWRkMWI0N2Q4MjQ0YTJkNTQxMjY5YmM1Mzc4UUGzTg==:
00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==:
00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE2MWQ1MDY2MjE2NmNlYTExNjA3YWRkMWI0N2Q4MjQ0YTJkNTQxMjY5YmM1Mzc4UUGzTg==:
00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==:
]] 00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==: 00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.939 nvme0n1 00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:46.939 
02:00:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:46.939 02:00:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:47.197 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:47.197 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:47.197 02:00:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:47.197 02:00:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.197 02:00:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:47.197 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:47.197 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:47.197 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:47.197 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:31:47.197 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:47.197 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:47.197 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:47.197 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:47.197 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTUxY2UxNTgwMWU4MmU2N2VjNDY0ZTE0ZDE1M2NkNDXh1Fno: 00:31:47.197 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTZkZDJkNmQ2NGZkODAzNjg5YzY4MTllMTdiODBhMjg5NzhmNTc3ZTM3YTA1OTE0ZGE3NjFiY2RlMDViZjYzNSrdd94=: 00:31:47.197 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:47.197 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:47.197 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTUxY2UxNTgwMWU4MmU2N2VjNDY0ZTE0ZDE1M2NkNDXh1Fno: 00:31:47.197 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTZkZDJkNmQ2NGZkODAzNjg5YzY4MTllMTdiODBhMjg5NzhmNTc3ZTM3YTA1OTE0ZGE3NjFiY2RlMDViZjYzNSrdd94=: ]] 00:31:47.197 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTZkZDJkNmQ2NGZkODAzNjg5YzY4MTllMTdiODBhMjg5NzhmNTc3ZTM3YTA1OTE0ZGE3NjFiY2RlMDViZjYzNSrdd94=: 00:31:47.197 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:31:47.197 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:47.197 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:47.197 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:47.197 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:47.197 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:47.197 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:47.197 02:00:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:47.197 02:00:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.197 02:00:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:47.197 
02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:47.197 02:00:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:47.197 02:00:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:47.197 02:00:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:47.197 02:00:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.197 02:00:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.197 02:00:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:47.197 02:00:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:47.197 02:00:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:47.197 02:00:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:47.197 02:00:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:47.197 02:00:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:47.197 02:00:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:47.197 02:00:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.197 nvme0n1 00:31:47.197 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:47.197 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:47.197 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:47.197 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:47.197 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.197 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:47.197 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:47.197 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:47.197 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:47.197 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.197 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:47.197 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:47.197 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:47.197 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:47.197 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:47.197 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:47.197 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:47.197 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE2MWQ1MDY2MjE2NmNlYTExNjA3YWRkMWI0N2Q4MjQ0YTJkNTQxMjY5YmM1Mzc4UUGzTg==: 00:31:47.197 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==: 00:31:47.197 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:47.197 02:00:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:47.197 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE2MWQ1MDY2MjE2NmNlYTExNjA3YWRkMWI0N2Q4MjQ0YTJkNTQxMjY5YmM1Mzc4UUGzTg==: 00:31:47.197 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==: ]] 00:31:47.197 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==: 00:31:47.197 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:31:47.197 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:47.197 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:47.197 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:47.197 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:47.197 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:47.197 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:47.197 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:47.197 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.197 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:47.197 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:47.197 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:47.197 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:47.197 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:47.197 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.197 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.197 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:47.197 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:47.197 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:47.197 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:47.197 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:47.197 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:47.197 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:47.197 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.454 nvme0n1 00:31:47.454 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:47.454 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:47.454 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:47.454 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:47.454 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
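Each authentication pass in this trace repeats one RPC pattern: constrain the SPDK initiator's DH-HMAC-CHAP digests and DH groups, attach to the kernel target naming the keyring keys, confirm the controller came up, and detach. rpc_cmd is the harness wrapper around scripts/rpc.py, so a stand-alone equivalent of the keyid=1 pass above would look roughly as follows; <key1-file>/<ckey1-file> are placeholders for the /tmp/spdk.key-* files the test registered earlier, everything else mirrors the trace:

  # Sketch of one connect_authenticate pass in plain scripts/rpc.py form.
  ./scripts/rpc.py keyring_file_add_key key1 <key1-file>
  ./scripts/rpc.py keyring_file_add_key ckey1 <ckey1-file>
  ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
  ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0

The attach only succeeds because nvmet_auth_set_key primed the kernel target with the same key just beforehand, which is why every loop iteration reprograms both sides before connecting.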
00:31:47.454 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:47.454 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:47.454 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:47.454 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:47.454 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.454 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:47.454 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:47.454 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:31:47.454 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:47.454 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:47.454 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:47.454 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:47.454 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkZGEwMDU3ZmM5NmIxYjYzMzIzNTcwMWEyYjg4NGboUjwp: 00:31:47.454 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGJkZDBmNjU3OGM3NzRlNjJjZTg0Njk2YzQzMTBiNDhM0beT: 00:31:47.454 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:47.454 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:47.454 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTdkZGEwMDU3ZmM5NmIxYjYzMzIzNTcwMWEyYjg4NGboUjwp: 00:31:47.454 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGJkZDBmNjU3OGM3NzRlNjJjZTg0Njk2YzQzMTBiNDhM0beT: ]] 00:31:47.454 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGJkZDBmNjU3OGM3NzRlNjJjZTg0Njk2YzQzMTBiNDhM0beT: 00:31:47.455 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:31:47.455 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:47.455 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:47.455 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:47.455 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:47.455 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:47.455 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:47.455 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:47.455 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.455 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:47.455 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:47.455 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:47.455 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:47.455 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:47.455 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.455 02:00:11 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.455 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:47.455 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:47.455 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:47.455 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:47.455 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:47.455 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:47.455 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:47.455 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.713 nvme0n1 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yzk3MWRkOTkzZDVjMDg1ZDIwN2E4ODE4YmQ0NDQxZTY0MmFiNTc5ZjhhNzEwZDA14uKIxw==: 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjVkMDYyZTM4NWIyODczMTk0NzRkZjg5NzEyYzhlYmNgnsvh: 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yzk3MWRkOTkzZDVjMDg1ZDIwN2E4ODE4YmQ0NDQxZTY0MmFiNTc5ZjhhNzEwZDA14uKIxw==: 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjVkMDYyZTM4NWIyODczMTk0NzRkZjg5NzEyYzhlYmNgnsvh: ]] 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjVkMDYyZTM4NWIyODczMTk0NzRkZjg5NzEyYzhlYmNgnsvh: 00:31:47.713 02:00:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.713 nvme0n1 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:47.713 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:47.971 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDE3YzI4NmE5NzRhYzQxYmNmNzNlNjI0NDQ0YWRkZTk1Mzc2OGM5ZTAwYmY3YzcyZTdhNjU0NzRiOGZlYzZhMUzP4ls=: 00:31:47.971 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:47.971 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:47.971 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:47.971 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDE3YzI4NmE5NzRhYzQxYmNmNzNlNjI0NDQ0YWRkZTk1Mzc2OGM5ZTAwYmY3YzcyZTdhNjU0NzRiOGZlYzZhMUzP4ls=: 00:31:47.971 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:47.971 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:31:47.971 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:47.971 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:47.971 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:47.971 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:47.971 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:47.971 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:47.971 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:47.971 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.971 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:47.971 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:47.971 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:47.971 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:47.971 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:47.971 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.972 nvme0n1 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTUxY2UxNTgwMWU4MmU2N2VjNDY0ZTE0ZDE1M2NkNDXh1Fno: 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTZkZDJkNmQ2NGZkODAzNjg5YzY4MTllMTdiODBhMjg5NzhmNTc3ZTM3YTA1OTE0ZGE3NjFiY2RlMDViZjYzNSrdd94=: 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTUxY2UxNTgwMWU4MmU2N2VjNDY0ZTE0ZDE1M2NkNDXh1Fno: 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTZkZDJkNmQ2NGZkODAzNjg5YzY4MTllMTdiODBhMjg5NzhmNTc3ZTM3YTA1OTE0ZGE3NjFiY2RlMDViZjYzNSrdd94=: ]] 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTZkZDJkNmQ2NGZkODAzNjg5YzY4MTllMTdiODBhMjg5NzhmNTc3ZTM3YTA1OTE0ZGE3NjFiY2RlMDViZjYzNSrdd94=: 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:47.972 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.229 nvme0n1 00:31:48.229 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:48.229 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.229 02:00:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:48.229 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:48.229 02:00:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.229 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:48.229 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:48.229 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:48.229 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:48.229 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.229 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:48.230 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:48.230 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:31:48.230 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:48.230 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:48.230 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:48.230 02:00:12 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:31:48.230 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE2MWQ1MDY2MjE2NmNlYTExNjA3YWRkMWI0N2Q4MjQ0YTJkNTQxMjY5YmM1Mzc4UUGzTg==: 00:31:48.230 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==: 00:31:48.230 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:48.230 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:48.230 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE2MWQ1MDY2MjE2NmNlYTExNjA3YWRkMWI0N2Q4MjQ0YTJkNTQxMjY5YmM1Mzc4UUGzTg==: 00:31:48.230 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==: ]] 00:31:48.230 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==: 00:31:48.230 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:31:48.230 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:48.230 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:48.230 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:48.230 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:48.230 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:48.230 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:48.230 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:48.230 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.230 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:48.230 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:48.230 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:48.230 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:48.230 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:48.230 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:48.230 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:48.230 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:48.230 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:48.230 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:48.230 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:48.230 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:48.230 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:48.230 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:48.230 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.487 nvme0n1 00:31:48.487 
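The get_main_ns_ip helper traced over and over above (nvmf/common.sh@741-755) just maps the transport under test to the environment variable holding the address to dial. Condensed, with the indirect expansion inferred from the paired ip=NVMF_INITIATOR_IP / echo 10.0.0.1 records, it amounts to:

  # Condensed form of get_main_ns_ip as traced above (a paraphrase of
  # the records, not a verbatim copy of nvmf/common.sh).
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          ["rdma"]=NVMF_FIRST_TARGET_IP
          ["tcp"]=NVMF_INITIATOR_IP
      )
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}   # e.g. NVMF_INITIATOR_IP for tcp
      [[ -z ${!ip} ]] && return 1            # the named env var must be set
      echo "${!ip}"                          # 10.0.0.1 in this run
  }

For this tcp run it always resolves to 10.0.0.1, the address the kernel target was configured to listen on earlier.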
02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:48.487 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.487 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:48.487 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.487 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:48.487 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:48.487 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:48.487 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:48.487 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:48.487 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.487 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:48.487 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:48.487 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:31:48.487 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:48.487 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:48.487 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:48.487 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:48.487 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkZGEwMDU3ZmM5NmIxYjYzMzIzNTcwMWEyYjg4NGboUjwp: 00:31:48.487 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGJkZDBmNjU3OGM3NzRlNjJjZTg0Njk2YzQzMTBiNDhM0beT: 00:31:48.487 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:48.487 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:48.487 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTdkZGEwMDU3ZmM5NmIxYjYzMzIzNTcwMWEyYjg4NGboUjwp: 00:31:48.488 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGJkZDBmNjU3OGM3NzRlNjJjZTg0Njk2YzQzMTBiNDhM0beT: ]] 00:31:48.488 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGJkZDBmNjU3OGM3NzRlNjJjZTg0Njk2YzQzMTBiNDhM0beT: 00:31:48.488 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:31:48.488 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:48.488 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:48.488 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:48.488 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:48.488 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:48.488 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:48.488 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:48.488 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.488 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:48.488 02:00:12 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:31:48.488 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:48.488 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:48.488 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:48.488 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:48.488 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:48.488 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:48.488 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:48.488 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:48.488 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:48.488 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:48.488 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:48.488 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:48.488 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.745 nvme0n1 00:31:48.745 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:48.745 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.745 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:48.745 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.745 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:48.745 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:48.745 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:48.745 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:48.745 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:48.745 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.745 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:48.745 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:48.745 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:31:48.745 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:48.745 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:48.745 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:48.746 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:48.746 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yzk3MWRkOTkzZDVjMDg1ZDIwN2E4ODE4YmQ0NDQxZTY0MmFiNTc5ZjhhNzEwZDA14uKIxw==: 00:31:48.746 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjVkMDYyZTM4NWIyODczMTk0NzRkZjg5NzEyYzhlYmNgnsvh: 00:31:48.746 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:48.746 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
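All of the key and ckey values in this trace use the NVMe DH-HMAC-CHAP secret representation, DHHC-1:<t>:<base64>:, where <t> records how the secret is stored (00 unhashed, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload carries the secret plus a CRC-32. The @48-@51 echos push the chosen hash, DH group, and keys into the kernel host entry's dhchap_* configfs attributes (destinations inferred; xtrace shows only the values). Outside the harness, a compatible key can be minted with recent nvme-cli, for example:

  # Hypothetical invocation; gen-dhchap-key and its flag spellings are
  # from recent nvme-cli builds - verify against
  # 'nvme gen-dhchap-key --help' on your version.
  nvme gen-dhchap-key --key-length=32 --hmac=1 \
        --nqn nqn.2024-02.io.spdk:host0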
00:31:48.746 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yzk3MWRkOTkzZDVjMDg1ZDIwN2E4ODE4YmQ0NDQxZTY0MmFiNTc5ZjhhNzEwZDA14uKIxw==: 00:31:48.746 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjVkMDYyZTM4NWIyODczMTk0NzRkZjg5NzEyYzhlYmNgnsvh: ]] 00:31:48.746 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjVkMDYyZTM4NWIyODczMTk0NzRkZjg5NzEyYzhlYmNgnsvh: 00:31:48.746 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:31:48.746 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:48.746 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:48.746 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:48.746 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:48.746 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:48.746 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:48.746 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:48.746 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.746 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:48.746 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:48.746 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:48.746 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:48.746 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:48.746 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:48.746 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:48.746 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:48.746 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:48.746 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:48.746 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:48.746 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:48.746 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:48.746 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:48.746 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.746 nvme0n1 00:31:48.746 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:48.746 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.746 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:48.746 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.746 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:48.746 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:49.003 
02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDE3YzI4NmE5NzRhYzQxYmNmNzNlNjI0NDQ0YWRkZTk1Mzc2OGM5ZTAwYmY3YzcyZTdhNjU0NzRiOGZlYzZhMUzP4ls=: 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDE3YzI4NmE5NzRhYzQxYmNmNzNlNjI0NDQ0YWRkZTk1Mzc2OGM5ZTAwYmY3YzcyZTdhNjU0NzRiOGZlYzZhMUzP4ls=: 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:49.003 02:00:12 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.003 nvme0n1 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:49.003 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.260 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:49.260 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:49.260 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:49.260 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:31:49.260 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:49.260 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:49.260 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:49.260 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:49.260 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTUxY2UxNTgwMWU4MmU2N2VjNDY0ZTE0ZDE1M2NkNDXh1Fno: 00:31:49.260 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTZkZDJkNmQ2NGZkODAzNjg5YzY4MTllMTdiODBhMjg5NzhmNTc3ZTM3YTA1OTE0ZGE3NjFiY2RlMDViZjYzNSrdd94=: 00:31:49.260 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:49.260 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:49.261 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTUxY2UxNTgwMWU4MmU2N2VjNDY0ZTE0ZDE1M2NkNDXh1Fno: 00:31:49.261 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTZkZDJkNmQ2NGZkODAzNjg5YzY4MTllMTdiODBhMjg5NzhmNTc3ZTM3YTA1OTE0ZGE3NjFiY2RlMDViZjYzNSrdd94=: ]] 00:31:49.261 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTZkZDJkNmQ2NGZkODAzNjg5YzY4MTllMTdiODBhMjg5NzhmNTc3ZTM3YTA1OTE0ZGE3NjFiY2RlMDViZjYzNSrdd94=: 00:31:49.261 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:31:49.261 02:00:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:49.261 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:49.261 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:49.261 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:49.261 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:49.261 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:49.261 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:49.261 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.261 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:49.261 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:49.261 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:49.261 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:49.261 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:49.261 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:49.261 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:49.261 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:49.261 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:49.261 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:49.261 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:49.261 02:00:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:49.261 02:00:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:49.261 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:49.261 02:00:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.518 nvme0n1 00:31:49.518 02:00:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:49.518 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:49.518 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:49.518 02:00:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:49.518 02:00:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.518 02:00:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:49.518 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:49.518 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:49.518 02:00:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:49.518 02:00:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.518 02:00:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:49.518 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:31:49.518 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:31:49.518 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:49.518 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:49.518 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:49.518 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:49.518 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE2MWQ1MDY2MjE2NmNlYTExNjA3YWRkMWI0N2Q4MjQ0YTJkNTQxMjY5YmM1Mzc4UUGzTg==: 00:31:49.518 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==: 00:31:49.518 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:49.518 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:49.518 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE2MWQ1MDY2MjE2NmNlYTExNjA3YWRkMWI0N2Q4MjQ0YTJkNTQxMjY5YmM1Mzc4UUGzTg==: 00:31:49.518 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==: ]] 00:31:49.518 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==: 00:31:49.518 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:31:49.518 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:49.518 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:49.518 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:49.518 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:49.518 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:49.518 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:49.518 02:00:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:49.518 02:00:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.518 02:00:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:49.518 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:49.518 02:00:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:49.518 02:00:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:49.518 02:00:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:49.518 02:00:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:49.518 02:00:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:49.518 02:00:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:49.518 02:00:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:49.518 02:00:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:49.518 02:00:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:49.518 02:00:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:49.518 02:00:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:49.518 02:00:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:49.518 02:00:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.775 nvme0n1 00:31:49.775 02:00:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:49.775 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:49.775 02:00:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:49.775 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:49.775 02:00:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.775 02:00:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:49.775 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:49.775 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:49.775 02:00:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:49.775 02:00:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.775 02:00:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:49.775 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:49.775 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:31:49.775 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:49.775 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:49.775 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:49.775 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:49.775 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkZGEwMDU3ZmM5NmIxYjYzMzIzNTcwMWEyYjg4NGboUjwp: 00:31:49.775 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGJkZDBmNjU3OGM3NzRlNjJjZTg0Njk2YzQzMTBiNDhM0beT: 00:31:49.775 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:49.775 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:49.775 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTdkZGEwMDU3ZmM5NmIxYjYzMzIzNTcwMWEyYjg4NGboUjwp: 00:31:49.775 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGJkZDBmNjU3OGM3NzRlNjJjZTg0Njk2YzQzMTBiNDhM0beT: ]] 00:31:49.775 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGJkZDBmNjU3OGM3NzRlNjJjZTg0Njk2YzQzMTBiNDhM0beT: 00:31:49.775 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:31:49.775 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:49.775 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:49.775 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:49.775 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:49.775 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:49.775 02:00:13 
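
Every secret echoed in this trace uses the printable DHHC-1 representation defined for NVMe in-band authentication: DHHC-1:<t>:<base64>:, where <t> names the transformation applied to the secret (00 = none, 01/02/03 = SHA-256/384/512) and the base64 field carries the secret followed by a 4-byte CRC-32 check value. That structure can be sanity-checked with nothing but coreutils; the key below is copied verbatim from the trace, and the 68 decoded bytes correspond to a 64-byte secret plus the CRC:

# Decode the base64 field of a DHHC-1 secret and report its length.
key='DHHC-1:03:MDE3YzI4NmE5NzRhYzQxYmNmNzNlNjI0NDQ0YWRkZTk1Mzc2OGM5ZTAwYmY3YzcyZTdhNjU0NzRiOGZlYzZhMUzP4ls=:'
printf '%s' "$key" | cut -d: -f3 | base64 -d | wc -c   # prints 68
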
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:49.775 02:00:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:49.775 02:00:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.775 02:00:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:49.775 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:49.775 02:00:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:49.775 02:00:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:49.775 02:00:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:49.775 02:00:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:49.775 02:00:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:49.775 02:00:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:49.775 02:00:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:49.775 02:00:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:49.775 02:00:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:49.775 02:00:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:49.775 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:49.775 02:00:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:49.775 02:00:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.032 nvme0n1 00:31:50.032 02:00:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:50.032 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:50.032 02:00:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:50.032 02:00:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.032 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:50.032 02:00:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:50.032 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:50.032 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:50.032 02:00:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:50.032 02:00:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.032 02:00:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:50.032 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:50.032 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:31:50.032 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:50.032 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:50.032 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:50.032 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:31:50.032 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yzk3MWRkOTkzZDVjMDg1ZDIwN2E4ODE4YmQ0NDQxZTY0MmFiNTc5ZjhhNzEwZDA14uKIxw==: 00:31:50.032 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjVkMDYyZTM4NWIyODczMTk0NzRkZjg5NzEyYzhlYmNgnsvh: 00:31:50.032 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:50.032 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:50.032 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yzk3MWRkOTkzZDVjMDg1ZDIwN2E4ODE4YmQ0NDQxZTY0MmFiNTc5ZjhhNzEwZDA14uKIxw==: 00:31:50.032 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjVkMDYyZTM4NWIyODczMTk0NzRkZjg5NzEyYzhlYmNgnsvh: ]] 00:31:50.032 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjVkMDYyZTM4NWIyODczMTk0NzRkZjg5NzEyYzhlYmNgnsvh: 00:31:50.032 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:31:50.032 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:50.032 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:50.032 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:50.032 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:50.032 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:50.032 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:50.032 02:00:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:50.032 02:00:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.032 02:00:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:50.032 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:50.032 02:00:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:50.032 02:00:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:50.032 02:00:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:50.032 02:00:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.032 02:00:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.032 02:00:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:50.033 02:00:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:50.033 02:00:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:50.033 02:00:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:50.033 02:00:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:50.033 02:00:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:50.033 02:00:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:50.033 02:00:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.290 nvme0n1 00:31:50.290 02:00:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:50.290 02:00:14 
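
A bash detail worth noting in the repeated host/auth.sh@58 line, ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}): the :+ expansion produces the option/value pair only when a controller key exists for this keyid, so ckey is either a two-element array or an empty one, and the attach call can always splice "${ckey[@]}" without a conditional. That is why keyid 4, whose ckey is empty ([[ -z '' ]] in the trace), attaches with --dhchap-key key4 alone while keyids 0-3 also pass --dhchap-ctrlr-key. A standalone illustration of the idiom (array contents here are placeholders, not the real secrets):

# ${var:+word} expands to nothing when var is empty or unset, so ckey
# becomes () for keyid 4 and (--dhchap-ctrlr-key ckeyN) otherwise.
ckeys=('ctrlr-secret-0' 'ctrlr-secret-1' 'ctrlr-secret-2' 'ctrlr-secret-3' '')

for keyid in "${!ckeys[@]}"; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> ${ckey[*]:-no ctrlr key}"
done
# keyid=0 -> --dhchap-ctrlr-key ckey0
# ...
# keyid=4 -> no ctrlr key
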
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:50.290 02:00:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:50.290 02:00:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.290 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:50.290 02:00:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:50.290 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:50.290 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:50.290 02:00:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:50.290 02:00:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.290 02:00:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:50.290 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:50.290 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:31:50.290 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:50.290 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:50.290 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:50.290 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:50.290 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDE3YzI4NmE5NzRhYzQxYmNmNzNlNjI0NDQ0YWRkZTk1Mzc2OGM5ZTAwYmY3YzcyZTdhNjU0NzRiOGZlYzZhMUzP4ls=: 00:31:50.290 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:50.290 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:50.290 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:50.290 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDE3YzI4NmE5NzRhYzQxYmNmNzNlNjI0NDQ0YWRkZTk1Mzc2OGM5ZTAwYmY3YzcyZTdhNjU0NzRiOGZlYzZhMUzP4ls=: 00:31:50.290 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:50.290 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:31:50.290 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:50.290 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:50.290 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:50.290 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:50.290 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:50.290 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:50.290 02:00:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:50.290 02:00:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.547 02:00:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:50.547 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:50.547 02:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:50.547 02:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:50.547 02:00:14 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:31:50.547 02:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.547 02:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.547 02:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:50.547 02:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:50.547 02:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:50.547 02:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:50.547 02:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:50.547 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:50.547 02:00:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:50.547 02:00:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.547 nvme0n1 00:31:50.548 02:00:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:50.834 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:50.834 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:50.834 02:00:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:50.834 02:00:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.834 02:00:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:50.834 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:50.834 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:50.834 02:00:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:50.834 02:00:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.834 02:00:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:50.834 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:50.834 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:50.834 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:31:50.834 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:50.834 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:50.834 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:50.834 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:50.834 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTUxY2UxNTgwMWU4MmU2N2VjNDY0ZTE0ZDE1M2NkNDXh1Fno: 00:31:50.834 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTZkZDJkNmQ2NGZkODAzNjg5YzY4MTllMTdiODBhMjg5NzhmNTc3ZTM3YTA1OTE0ZGE3NjFiY2RlMDViZjYzNSrdd94=: 00:31:50.834 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:50.834 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:50.834 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTUxY2UxNTgwMWU4MmU2N2VjNDY0ZTE0ZDE1M2NkNDXh1Fno: 00:31:50.834 02:00:14 
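
get_main_ns_ip, traced at nvmf/common.sh@741-755 before every attach, resolves the dial address by mapping the transport to the name of the environment variable that holds it, then expanding that name indirectly. Reconstructed from the traced lines (the variable carrying the value "tcp" is not named in the trace, so $transport below is a stand-in):

# Map each transport to the *name* of the variable holding its address,
# then use bash indirect expansion (${!var}) to fetch the value.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP
        [tcp]=NVMF_INITIATOR_IP
    )
    # traced as: [[ -z tcp ]] and [[ -z NVMF_INITIATOR_IP ]]
    [[ -z $transport || -z ${ip_candidates[$transport]} ]] && return 1
    ip=${ip_candidates[$transport]}   # traced as: ip=NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1       # traced as: [[ -z 10.0.0.1 ]]
    echo "${!ip}"                     # traced as: echo 10.0.0.1
}
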
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTZkZDJkNmQ2NGZkODAzNjg5YzY4MTllMTdiODBhMjg5NzhmNTc3ZTM3YTA1OTE0ZGE3NjFiY2RlMDViZjYzNSrdd94=: ]] 00:31:50.835 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTZkZDJkNmQ2NGZkODAzNjg5YzY4MTllMTdiODBhMjg5NzhmNTc3ZTM3YTA1OTE0ZGE3NjFiY2RlMDViZjYzNSrdd94=: 00:31:50.835 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:31:50.835 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:50.835 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:50.835 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:50.835 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:50.835 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:50.835 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:50.835 02:00:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:50.835 02:00:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.835 02:00:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:50.835 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:50.835 02:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:50.835 02:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:50.835 02:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:50.835 02:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.835 02:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.835 02:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:50.835 02:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:50.835 02:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:50.835 02:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:50.835 02:00:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:50.835 02:00:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:50.835 02:00:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:50.835 02:00:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.400 nvme0n1 00:31:51.400 02:00:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:51.400 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:51.400 02:00:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:51.400 02:00:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.400 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:51.400 02:00:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:51.400 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:51.400 
02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:51.400 02:00:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:51.400 02:00:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.400 02:00:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:51.400 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:51.400 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:31:51.400 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:51.401 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:51.401 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:51.401 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:51.401 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE2MWQ1MDY2MjE2NmNlYTExNjA3YWRkMWI0N2Q4MjQ0YTJkNTQxMjY5YmM1Mzc4UUGzTg==: 00:31:51.401 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==: 00:31:51.401 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:51.401 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:51.401 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE2MWQ1MDY2MjE2NmNlYTExNjA3YWRkMWI0N2Q4MjQ0YTJkNTQxMjY5YmM1Mzc4UUGzTg==: 00:31:51.401 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==: ]] 00:31:51.401 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==: 00:31:51.401 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:31:51.401 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:51.401 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:51.401 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:51.401 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:51.401 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:51.401 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:51.401 02:00:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:51.401 02:00:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.401 02:00:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:51.401 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:51.401 02:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:51.401 02:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:51.401 02:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:51.401 02:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:51.401 02:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:51.401 02:00:15 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:51.401 02:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:51.401 02:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:51.401 02:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:51.401 02:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:51.401 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:51.401 02:00:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:51.401 02:00:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.658 nvme0n1 00:31:51.658 02:00:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:51.658 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:51.658 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:51.658 02:00:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:51.658 02:00:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.658 02:00:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:51.915 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:51.915 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:51.915 02:00:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:51.915 02:00:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.915 02:00:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:51.915 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:51.915 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:31:51.915 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:51.915 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:51.915 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:51.915 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:51.915 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkZGEwMDU3ZmM5NmIxYjYzMzIzNTcwMWEyYjg4NGboUjwp: 00:31:51.915 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGJkZDBmNjU3OGM3NzRlNjJjZTg0Njk2YzQzMTBiNDhM0beT: 00:31:51.915 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:51.915 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:51.915 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTdkZGEwMDU3ZmM5NmIxYjYzMzIzNTcwMWEyYjg4NGboUjwp: 00:31:51.915 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGJkZDBmNjU3OGM3NzRlNjJjZTg0Njk2YzQzMTBiNDhM0beT: ]] 00:31:51.915 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGJkZDBmNjU3OGM3NzRlNjJjZTg0Njk2YzQzMTBiNDhM0beT: 00:31:51.915 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:31:51.915 02:00:15 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:51.915 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:51.915 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:51.915 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:51.915 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:51.915 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:51.915 02:00:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:51.915 02:00:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.915 02:00:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:51.915 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:51.915 02:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:51.915 02:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:51.915 02:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:51.915 02:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:51.915 02:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:51.915 02:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:51.915 02:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:51.915 02:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:51.915 02:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:51.915 02:00:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:51.915 02:00:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:51.915 02:00:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:51.915 02:00:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.172 nvme0n1 00:31:52.172 02:00:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:52.172 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.172 02:00:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:52.172 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:52.172 02:00:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.172 02:00:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:52.429 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:52.429 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:52.429 02:00:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:52.429 02:00:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.429 02:00:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:52.429 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:52.429 
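
At this point the sha256 sweep has finished ffdhe4096 and is midway through ffdhe6144, with ffdhe8192 still to come. The loop heads visible in the trace (for dhgroup in "${dhgroups[@]}" at host/auth.sh@101, for keyid in "${!keys[@]}" at @102) imply a plain nested sweep of the following shape; the array contents reflect only what this excerpt shows, keys/ckeys are the suite's secret arrays, and an outer digest loop presumably exists but is not visible here:

# Indicative driver: one set-key/connect/verify/detach round trip per
# (dhgroup, keyid) pair, all under the sha256 digest in this excerpt.
dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)

for dhgroup in "${dhgroups[@]}"; do      # host/auth.sh@101
    for keyid in "${!keys[@]}"; do       # host/auth.sh@102
        nvmet_auth_set_key sha256 "$dhgroup" "$keyid"    # @103
        connect_authenticate sha256 "$dhgroup" "$keyid"  # @104
    done
done
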
02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:31:52.429 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:52.429 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:52.429 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:52.429 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:52.429 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yzk3MWRkOTkzZDVjMDg1ZDIwN2E4ODE4YmQ0NDQxZTY0MmFiNTc5ZjhhNzEwZDA14uKIxw==: 00:31:52.430 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjVkMDYyZTM4NWIyODczMTk0NzRkZjg5NzEyYzhlYmNgnsvh: 00:31:52.430 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:52.430 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:52.430 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yzk3MWRkOTkzZDVjMDg1ZDIwN2E4ODE4YmQ0NDQxZTY0MmFiNTc5ZjhhNzEwZDA14uKIxw==: 00:31:52.430 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjVkMDYyZTM4NWIyODczMTk0NzRkZjg5NzEyYzhlYmNgnsvh: ]] 00:31:52.430 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjVkMDYyZTM4NWIyODczMTk0NzRkZjg5NzEyYzhlYmNgnsvh: 00:31:52.430 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:31:52.430 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:52.430 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:52.430 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:52.430 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:52.430 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:52.430 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:52.430 02:00:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:52.430 02:00:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.430 02:00:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:52.430 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:52.430 02:00:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:52.430 02:00:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:52.430 02:00:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:52.430 02:00:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:52.430 02:00:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:52.430 02:00:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:52.430 02:00:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:52.430 02:00:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:52.430 02:00:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:52.430 02:00:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:52.430 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:52.430 02:00:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:52.430 02:00:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.003 nvme0n1 00:31:53.003 02:00:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:53.003 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:53.003 02:00:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:53.003 02:00:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.003 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:53.003 02:00:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:53.003 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:53.003 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:53.003 02:00:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:53.003 02:00:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.003 02:00:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:53.003 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:53.003 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:31:53.003 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:53.003 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:53.003 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:53.003 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:53.003 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDE3YzI4NmE5NzRhYzQxYmNmNzNlNjI0NDQ0YWRkZTk1Mzc2OGM5ZTAwYmY3YzcyZTdhNjU0NzRiOGZlYzZhMUzP4ls=: 00:31:53.003 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:53.003 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:53.003 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:53.003 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDE3YzI4NmE5NzRhYzQxYmNmNzNlNjI0NDQ0YWRkZTk1Mzc2OGM5ZTAwYmY3YzcyZTdhNjU0NzRiOGZlYzZhMUzP4ls=: 00:31:53.003 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:53.003 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:31:53.003 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:53.003 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:53.003 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:53.003 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:53.003 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:53.003 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:53.003 02:00:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:53.003 02:00:16 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:53.003 02:00:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:53.003 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:53.003 02:00:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:53.003 02:00:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:53.003 02:00:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:53.003 02:00:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:53.003 02:00:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:53.003 02:00:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:53.003 02:00:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:53.003 02:00:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:53.003 02:00:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:53.003 02:00:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:53.003 02:00:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:53.003 02:00:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:53.003 02:00:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.569 nvme0n1 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTUxY2UxNTgwMWU4MmU2N2VjNDY0ZTE0ZDE1M2NkNDXh1Fno: 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZTZkZDJkNmQ2NGZkODAzNjg5YzY4MTllMTdiODBhMjg5NzhmNTc3ZTM3YTA1OTE0ZGE3NjFiY2RlMDViZjYzNSrdd94=: 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTUxY2UxNTgwMWU4MmU2N2VjNDY0ZTE0ZDE1M2NkNDXh1Fno: 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTZkZDJkNmQ2NGZkODAzNjg5YzY4MTllMTdiODBhMjg5NzhmNTc3ZTM3YTA1OTE0ZGE3NjFiY2RlMDViZjYzNSrdd94=: ]] 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTZkZDJkNmQ2NGZkODAzNjg5YzY4MTllMTdiODBhMjg5NzhmNTc3ZTM3YTA1OTE0ZGE3NjFiY2RlMDViZjYzNSrdd94=: 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:53.569 02:00:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.510 nvme0n1 00:31:54.511 02:00:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:54.511 02:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:54.511 02:00:18 
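
The target-side half, nvmet_auth_set_key, surfaces in the trace only as four echo calls at host/auth.sh@48-51: the HMAC name, the DH group, the key, and (when present) the controller key. Those values line up with the per-host DH-CHAP attributes the Linux nvmet target exposes through configfs, so a plausible reconstruction is sketched below; the configfs paths and attribute names are assumed from the stock kernel layout, not read from this log:

# Hypothetical reconstruction of nvmet_auth_set_key's write targets.
hostnqn=nqn.2024-02.io.spdk:host0
host_dir=/sys/kernel/config/nvmet/hosts/$hostnqn   # assumed path

echo 'hmac(sha256)' > "$host_dir/dhchap_hash"      # traced at @48
echo "$dhgroup"     > "$host_dir/dhchap_dhgroup"   # traced at @49
echo "$key"         > "$host_dir/dhchap_key"       # traced at @50
[[ -z $ckey ]] || echo "$ckey" > "$host_dir/dhchap_ctrl_key"   # @51
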
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:54.511 02:00:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:54.511 02:00:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.511 02:00:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:54.511 02:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:54.511 02:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:54.511 02:00:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:54.511 02:00:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.511 02:00:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:54.511 02:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:54.511 02:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:31:54.511 02:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:54.511 02:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:54.511 02:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:54.511 02:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:54.511 02:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE2MWQ1MDY2MjE2NmNlYTExNjA3YWRkMWI0N2Q4MjQ0YTJkNTQxMjY5YmM1Mzc4UUGzTg==: 00:31:54.511 02:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==: 00:31:54.511 02:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:54.511 02:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:54.511 02:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE2MWQ1MDY2MjE2NmNlYTExNjA3YWRkMWI0N2Q4MjQ0YTJkNTQxMjY5YmM1Mzc4UUGzTg==: 00:31:54.511 02:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==: ]] 00:31:54.511 02:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==: 00:31:54.511 02:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:31:54.511 02:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:54.511 02:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:54.511 02:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:54.511 02:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:54.511 02:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:54.511 02:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:54.511 02:00:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:54.511 02:00:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.511 02:00:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:54.511 02:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:54.511 02:00:18 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:31:54.511 02:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:54.511 02:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:54.511 02:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:54.511 02:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:54.511 02:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:54.511 02:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:54.511 02:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:54.511 02:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:54.511 02:00:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:54.511 02:00:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:54.511 02:00:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:54.511 02:00:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.446 nvme0n1 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkZGEwMDU3ZmM5NmIxYjYzMzIzNTcwMWEyYjg4NGboUjwp: 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGJkZDBmNjU3OGM3NzRlNjJjZTg0Njk2YzQzMTBiNDhM0beT: 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NTdkZGEwMDU3ZmM5NmIxYjYzMzIzNTcwMWEyYjg4NGboUjwp: 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGJkZDBmNjU3OGM3NzRlNjJjZTg0Njk2YzQzMTBiNDhM0beT: ]] 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGJkZDBmNjU3OGM3NzRlNjJjZTg0Njk2YzQzMTBiNDhM0beT: 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:55.447 02:00:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.391 nvme0n1 00:31:56.391 02:00:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:56.391 02:00:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:56.391 02:00:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:56.391 02:00:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:56.391 02:00:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.391 02:00:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:56.391 02:00:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:56.391 
02:00:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:56.391 02:00:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:56.391 02:00:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.391 02:00:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:56.391 02:00:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:56.391 02:00:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:31:56.391 02:00:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:56.391 02:00:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:56.391 02:00:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:56.392 02:00:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:56.392 02:00:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yzk3MWRkOTkzZDVjMDg1ZDIwN2E4ODE4YmQ0NDQxZTY0MmFiNTc5ZjhhNzEwZDA14uKIxw==: 00:31:56.392 02:00:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjVkMDYyZTM4NWIyODczMTk0NzRkZjg5NzEyYzhlYmNgnsvh: 00:31:56.392 02:00:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:56.392 02:00:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:56.392 02:00:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yzk3MWRkOTkzZDVjMDg1ZDIwN2E4ODE4YmQ0NDQxZTY0MmFiNTc5ZjhhNzEwZDA14uKIxw==: 00:31:56.392 02:00:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjVkMDYyZTM4NWIyODczMTk0NzRkZjg5NzEyYzhlYmNgnsvh: ]] 00:31:56.392 02:00:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjVkMDYyZTM4NWIyODczMTk0NzRkZjg5NzEyYzhlYmNgnsvh: 00:31:56.392 02:00:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:31:56.392 02:00:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:56.392 02:00:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:56.392 02:00:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:56.392 02:00:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:56.392 02:00:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:56.392 02:00:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:56.392 02:00:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:56.392 02:00:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.392 02:00:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:56.392 02:00:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:56.392 02:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:56.392 02:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:56.392 02:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:56.392 02:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:56.392 02:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:56.392 02:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
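The nvmf/common.sh@741-755 block that repeats before every attach in this trace is the get_main_ns_ip helper resolving the address the authenticated connection should target: it maps each transport to the name of the environment variable holding its address (rdma -> NVMF_FIRST_TARGET_IP, tcp -> NVMF_INITIATOR_IP), checks that the transport and the resolved value are non-empty, and echoes the result (10.0.0.1 on every pass of this run). A minimal sketch reconstructed from the trace; TEST_TRANSPORT is the assumed name of the transport variable, which the xtrace output does not show, and the error paths are reduced to bare returns:

    # Reconstructed from the nvmf/common.sh@741-755 lines above; not verbatim.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()

        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        # common.sh@747: bail out on a missing/unknown transport
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

        ip=${ip_candidates[$TEST_TRANSPORT]} # name of the variable to read
        [[ -z ${!ip} ]] && return 1          # common.sh@750: value must be set
        echo "${!ip}"                        # common.sh@755: 10.0.0.1 here
    }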
00:31:56.392 02:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:56.392 02:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:56.392 02:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:56.392 02:00:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:56.392 02:00:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:56.392 02:00:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:56.392 02:00:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.324 nvme0n1 00:31:57.324 02:00:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:57.324 02:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:57.324 02:00:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:57.324 02:00:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.324 02:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:57.324 02:00:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:57.324 02:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:57.324 02:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:57.324 02:00:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:57.324 02:00:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.324 02:00:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:57.324 02:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:57.324 02:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:31:57.324 02:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:57.324 02:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:57.324 02:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:57.324 02:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:57.324 02:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDE3YzI4NmE5NzRhYzQxYmNmNzNlNjI0NDQ0YWRkZTk1Mzc2OGM5ZTAwYmY3YzcyZTdhNjU0NzRiOGZlYzZhMUzP4ls=: 00:31:57.324 02:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:57.324 02:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:57.324 02:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:57.324 02:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDE3YzI4NmE5NzRhYzQxYmNmNzNlNjI0NDQ0YWRkZTk1Mzc2OGM5ZTAwYmY3YzcyZTdhNjU0NzRiOGZlYzZhMUzP4ls=: 00:31:57.324 02:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:57.324 02:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:31:57.324 02:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:57.324 02:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:57.324 02:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:57.324 
02:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:57.324 02:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:57.324 02:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:57.324 02:00:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:57.324 02:00:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.324 02:00:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:57.324 02:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:57.324 02:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:57.324 02:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:57.324 02:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:57.324 02:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:57.324 02:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:57.324 02:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:57.324 02:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:57.324 02:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:57.324 02:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:57.324 02:00:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:57.324 02:00:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:57.324 02:00:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:57.324 02:00:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.257 nvme0n1 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTUxY2UxNTgwMWU4MmU2N2VjNDY0ZTE0ZDE1M2NkNDXh1Fno: 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTZkZDJkNmQ2NGZkODAzNjg5YzY4MTllMTdiODBhMjg5NzhmNTc3ZTM3YTA1OTE0ZGE3NjFiY2RlMDViZjYzNSrdd94=: 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTUxY2UxNTgwMWU4MmU2N2VjNDY0ZTE0ZDE1M2NkNDXh1Fno: 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTZkZDJkNmQ2NGZkODAzNjg5YzY4MTllMTdiODBhMjg5NzhmNTc3ZTM3YTA1OTE0ZGE3NjFiY2RlMDViZjYzNSrdd94=: ]] 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTZkZDJkNmQ2NGZkODAzNjg5YzY4MTllMTdiODBhMjg5NzhmNTc3ZTM3YTA1OTE0ZGE3NjFiY2RlMDViZjYzNSrdd94=: 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:58.257 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.515 nvme0n1 00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE2MWQ1MDY2MjE2NmNlYTExNjA3YWRkMWI0N2Q4MjQ0YTJkNTQxMjY5YmM1Mzc4UUGzTg==: 00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==: 00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE2MWQ1MDY2MjE2NmNlYTExNjA3YWRkMWI0N2Q4MjQ0YTJkNTQxMjY5YmM1Mzc4UUGzTg==: 00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==: ]] 00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==: 00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
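The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) assignment traced at auth.sh@58 just above is what makes bidirectional authentication optional per key: the array expands to the extra --dhchap-ctrlr-key argument only when a controller key is registered for that keyid and to nothing otherwise, which is why the keyid=4 passes elsewhere in this log attach with --dhchap-key key4 alone. A self-contained illustration of the expansion, with hypothetical key material:

    # ${var:+word} yields word only when var is set and non-empty, so the
    # optional RPC arguments vanish for entries without a controller key.
    ckeys=([1]="DHHC-1:02:hypothetical==" [4]="")

    for keyid in 1 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${ckey[*]:-<no controller key args>}"
    done
    # keyid=1 -> --dhchap-ctrlr-key ckey1
    # keyid=4 -> <no controller key args>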
00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:58.515 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.773 nvme0n1 00:31:58.773 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:58.773 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:58.773 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:58.773 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkZGEwMDU3ZmM5NmIxYjYzMzIzNTcwMWEyYjg4NGboUjwp: 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGJkZDBmNjU3OGM3NzRlNjJjZTg0Njk2YzQzMTBiNDhM0beT: 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTdkZGEwMDU3ZmM5NmIxYjYzMzIzNTcwMWEyYjg4NGboUjwp: 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGJkZDBmNjU3OGM3NzRlNjJjZTg0Njk2YzQzMTBiNDhM0beT: ]] 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGJkZDBmNjU3OGM3NzRlNjJjZTg0Njk2YzQzMTBiNDhM0beT: 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.774 nvme0n1 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yzk3MWRkOTkzZDVjMDg1ZDIwN2E4ODE4YmQ0NDQxZTY0MmFiNTc5ZjhhNzEwZDA14uKIxw==: 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjVkMDYyZTM4NWIyODczMTk0NzRkZjg5NzEyYzhlYmNgnsvh: 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yzk3MWRkOTkzZDVjMDg1ZDIwN2E4ODE4YmQ0NDQxZTY0MmFiNTc5ZjhhNzEwZDA14uKIxw==: 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjVkMDYyZTM4NWIyODczMTk0NzRkZjg5NzEyYzhlYmNgnsvh: ]] 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjVkMDYyZTM4NWIyODczMTk0NzRkZjg5NzEyYzhlYmNgnsvh: 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:58.774 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.032 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:59.032 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:59.032 02:00:22 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:31:59.032 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:59.032 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:59.032 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:59.032 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:59.032 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:59.032 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:59.032 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:59.032 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:59.032 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:59.032 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:59.032 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:59.032 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.032 nvme0n1 00:31:59.032 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:59.032 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:59.032 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:59.032 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.032 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:59.032 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:59.032 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:59.032 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:59.032 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:59.032 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.032 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:59.032 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:59.032 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:31:59.032 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:59.032 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:59.033 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:59.033 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:59.033 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDE3YzI4NmE5NzRhYzQxYmNmNzNlNjI0NDQ0YWRkZTk1Mzc2OGM5ZTAwYmY3YzcyZTdhNjU0NzRiOGZlYzZhMUzP4ls=: 00:31:59.033 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:59.033 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:59.033 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:59.033 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MDE3YzI4NmE5NzRhYzQxYmNmNzNlNjI0NDQ0YWRkZTk1Mzc2OGM5ZTAwYmY3YzcyZTdhNjU0NzRiOGZlYzZhMUzP4ls=: 00:31:59.033 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:59.033 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:31:59.033 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:59.033 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:59.033 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:59.033 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:59.033 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:59.033 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:59.033 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:59.033 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.033 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:59.033 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:59.033 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:59.033 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:59.033 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:59.033 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:59.033 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:59.033 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:59.033 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:59.033 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:59.033 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:59.033 02:00:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:59.033 02:00:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:59.033 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:59.033 02:00:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.290 nvme0n1 00:31:59.290 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:59.290 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:59.290 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:59.290 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.290 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:59.290 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:59.290 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:59.290 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:59.290 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:31:59.290 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.290 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:59.290 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:59.290 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:59.290 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:31:59.290 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:59.290 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:59.290 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:59.290 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:59.290 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTUxY2UxNTgwMWU4MmU2N2VjNDY0ZTE0ZDE1M2NkNDXh1Fno: 00:31:59.290 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTZkZDJkNmQ2NGZkODAzNjg5YzY4MTllMTdiODBhMjg5NzhmNTc3ZTM3YTA1OTE0ZGE3NjFiY2RlMDViZjYzNSrdd94=: 00:31:59.290 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:59.290 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:59.290 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTUxY2UxNTgwMWU4MmU2N2VjNDY0ZTE0ZDE1M2NkNDXh1Fno: 00:31:59.290 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTZkZDJkNmQ2NGZkODAzNjg5YzY4MTllMTdiODBhMjg5NzhmNTc3ZTM3YTA1OTE0ZGE3NjFiY2RlMDViZjYzNSrdd94=: ]] 00:31:59.290 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTZkZDJkNmQ2NGZkODAzNjg5YzY4MTllMTdiODBhMjg5NzhmNTc3ZTM3YTA1OTE0ZGE3NjFiY2RlMDViZjYzNSrdd94=: 00:31:59.290 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:31:59.290 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:59.290 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:59.290 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:59.290 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:59.290 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:59.290 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:59.290 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:59.290 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.290 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:59.290 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:59.291 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:59.291 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:59.291 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:59.291 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:59.291 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:59.291 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
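On the target side, each nvmet_auth_set_key call (auth.sh@42-51, traced again above for sha384/ffdhe3072 keyid 0) picks a digest, a DH group and one of the pre-generated secrets, then pushes 'hmac(<digest>)', the dhgroup, the key and, when one exists, the controller key via four echo statements. In the DHHC-1:NN:<base64>: notation of those secrets, the middle field records how the secret was transformed (00 = unhashed, 01/02/03 = SHA-256/384/512). The echo destinations are not visible in the xtrace output; on a standard Linux nvmet target they would presumably be the per-host configfs attributes, as in this sketch (paths assumed, not taken from this log):

    # Sketch under assumptions: the dhchap_* attribute paths come from the
    # usual Linux nvmet configfs layout and do not appear in the trace.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac(${digest})" > "$host/dhchap_hash"  # auth.sh@48
        echo "$dhgroup" > "$host/dhchap_dhgroup"      # auth.sh@49
        echo "${keys[keyid]}" > "$host/dhchap_key"    # auth.sh@50
        if [[ -n ${ckeys[keyid]} ]]; then             # auth.sh@51
            echo "${ckeys[keyid]}" > "$host/dhchap_ctrl_key"
        fi
    }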
00:31:59.291 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:59.291 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:59.291 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:59.291 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:59.291 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:59.291 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:59.291 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.548 nvme0n1 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE2MWQ1MDY2MjE2NmNlYTExNjA3YWRkMWI0N2Q4MjQ0YTJkNTQxMjY5YmM1Mzc4UUGzTg==: 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==: 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE2MWQ1MDY2MjE2NmNlYTExNjA3YWRkMWI0N2Q4MjQ0YTJkNTQxMjY5YmM1Mzc4UUGzTg==: 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==: ]] 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==: 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.548 nvme0n1 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.548 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkZGEwMDU3ZmM5NmIxYjYzMzIzNTcwMWEyYjg4NGboUjwp: 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGJkZDBmNjU3OGM3NzRlNjJjZTg0Njk2YzQzMTBiNDhM0beT: 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTdkZGEwMDU3ZmM5NmIxYjYzMzIzNTcwMWEyYjg4NGboUjwp: 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGJkZDBmNjU3OGM3NzRlNjJjZTg0Njk2YzQzMTBiNDhM0beT: ]] 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGJkZDBmNjU3OGM3NzRlNjJjZTg0Njk2YzQzMTBiNDhM0beT: 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.806 nvme0n1 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:59.806 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yzk3MWRkOTkzZDVjMDg1ZDIwN2E4ODE4YmQ0NDQxZTY0MmFiNTc5ZjhhNzEwZDA14uKIxw==: 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjVkMDYyZTM4NWIyODczMTk0NzRkZjg5NzEyYzhlYmNgnsvh: 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yzk3MWRkOTkzZDVjMDg1ZDIwN2E4ODE4YmQ0NDQxZTY0MmFiNTc5ZjhhNzEwZDA14uKIxw==: 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjVkMDYyZTM4NWIyODczMTk0NzRkZjg5NzEyYzhlYmNgnsvh: ]] 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjVkMDYyZTM4NWIyODczMTk0NzRkZjg5NzEyYzhlYmNgnsvh: 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.064 nvme0n1 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MDE3YzI4NmE5NzRhYzQxYmNmNzNlNjI0NDQ0YWRkZTk1Mzc2OGM5ZTAwYmY3YzcyZTdhNjU0NzRiOGZlYzZhMUzP4ls=: 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDE3YzI4NmE5NzRhYzQxYmNmNzNlNjI0NDQ0YWRkZTk1Mzc2OGM5ZTAwYmY3YzcyZTdhNjU0NzRiOGZlYzZhMUzP4ls=: 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:00.064 02:00:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.321 nvme0n1 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:00.321 02:00:24 
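The iterations in this trace all follow one pattern per (digest, dhgroup, keyid) tuple: program a DH-HMAC-CHAP secret into the kernel nvmet target, pin the host to the matching digest and DH group, attach, verify, detach. A minimal sketch of the driver loop, reconstructed from the host/auth.sh@101-104 lines visible in the trace; the dhgroups/keys/ckeys arrays and both helpers come from earlier in auth.sh, and the enclosing digest loop is not visible in this excerpt, so sha384 is written literally:

    # Reconstructed from the host/auth.sh@101-104 trace lines (a sketch, not
    # the script verbatim); arrays and helpers are assumed from earlier in auth.sh.
    for dhgroup in "${dhgroups[@]}"; do              # ffdhe3072, ffdhe4096, ffdhe6144, ffdhe8192, ...
        for keyid in "${!keys[@]}"; do               # 0..4; keyid 4 has no ckey
            nvmet_auth_set_key sha384 "$dhgroup" "$keyid"    # target side (configfs)
            connect_authenticate sha384 "$dhgroup" "$keyid"  # host side (SPDK RPCs)
        done
    done

Note the ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion at host/auth.sh@58: when ckeys[keyid] is empty, as it is for keyid 4 above (ckey=, then [[ -z '' ]]), the array stays empty and no --dhchap-ctrlr-key argument is passed, so that iteration exercises unidirectional authentication.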
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTUxY2UxNTgwMWU4MmU2N2VjNDY0ZTE0ZDE1M2NkNDXh1Fno: 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTZkZDJkNmQ2NGZkODAzNjg5YzY4MTllMTdiODBhMjg5NzhmNTc3ZTM3YTA1OTE0ZGE3NjFiY2RlMDViZjYzNSrdd94=: 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTUxY2UxNTgwMWU4MmU2N2VjNDY0ZTE0ZDE1M2NkNDXh1Fno: 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTZkZDJkNmQ2NGZkODAzNjg5YzY4MTllMTdiODBhMjg5NzhmNTc3ZTM3YTA1OTE0ZGE3NjFiY2RlMDViZjYzNSrdd94=: ]] 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTZkZDJkNmQ2NGZkODAzNjg5YzY4MTllMTdiODBhMjg5NzhmNTc3ZTM3YTA1OTE0ZGE3NjFiY2RlMDViZjYzNSrdd94=: 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:00.321 02:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.578 nvme0n1 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE2MWQ1MDY2MjE2NmNlYTExNjA3YWRkMWI0N2Q4MjQ0YTJkNTQxMjY5YmM1Mzc4UUGzTg==: 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==: 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTE2MWQ1MDY2MjE2NmNlYTExNjA3YWRkMWI0N2Q4MjQ0YTJkNTQxMjY5YmM1Mzc4UUGzTg==: 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==: ]] 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==: 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:00.578 02:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.835 nvme0n1 00:32:00.835 02:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:00.835 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:00.835 02:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:00.835 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:00.835 02:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.835 02:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:01.092 02:00:24 
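Every secret echoed in the trace uses the DHHC-1 presentation format from NVMe in-band authentication (TP 8006): DHHC-1:<t>:<base64 of the secret plus a CRC-32>:, where <t> is 00 for an untransformed secret and 01/02/03 for a secret pre-transformed with SHA-256/-384/-512. Secrets of this shape can be generated with recent nvme-cli; a hedged one-liner, with the NQN taken from this trace and the flag spelling from nvme-cli 2.x:

    # Emit a 48-byte untransformed secret in DHHC-1 format ("DHHC-1:00:...:").
    nvme gen-dhchap-key -l 48 -m 0 -n nqn.2024-02.io.spdk:host0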
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.092 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:01.092 02:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:01.092 02:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.092 02:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:01.092 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:01.092 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:32:01.092 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:01.092 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:01.092 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:01.092 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:01.092 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkZGEwMDU3ZmM5NmIxYjYzMzIzNTcwMWEyYjg4NGboUjwp: 00:32:01.092 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGJkZDBmNjU3OGM3NzRlNjJjZTg0Njk2YzQzMTBiNDhM0beT: 00:32:01.092 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:01.092 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:01.092 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTdkZGEwMDU3ZmM5NmIxYjYzMzIzNTcwMWEyYjg4NGboUjwp: 00:32:01.092 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGJkZDBmNjU3OGM3NzRlNjJjZTg0Njk2YzQzMTBiNDhM0beT: ]] 00:32:01.092 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGJkZDBmNjU3OGM3NzRlNjJjZTg0Njk2YzQzMTBiNDhM0beT: 00:32:01.092 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:32:01.092 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:01.092 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:01.092 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:01.092 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:01.092 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:01.092 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:01.092 02:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:01.092 02:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.092 02:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:01.092 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:01.092 02:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:01.092 02:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:01.092 02:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:01.092 02:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:01.092 02:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:01.092 02:00:24 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:01.092 02:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:01.092 02:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:01.092 02:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:01.092 02:00:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:01.092 02:00:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:01.092 02:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:01.092 02:00:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.349 nvme0n1 00:32:01.349 02:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:01.349 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:01.349 02:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:01.349 02:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.349 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:01.349 02:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:01.349 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.349 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:01.349 02:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:01.349 02:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.349 02:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:01.349 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:01.349 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:32:01.349 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:01.349 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:01.349 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:01.349 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:01.349 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yzk3MWRkOTkzZDVjMDg1ZDIwN2E4ODE4YmQ0NDQxZTY0MmFiNTc5ZjhhNzEwZDA14uKIxw==: 00:32:01.349 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjVkMDYyZTM4NWIyODczMTk0NzRkZjg5NzEyYzhlYmNgnsvh: 00:32:01.349 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:01.349 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:01.349 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yzk3MWRkOTkzZDVjMDg1ZDIwN2E4ODE4YmQ0NDQxZTY0MmFiNTc5ZjhhNzEwZDA14uKIxw==: 00:32:01.349 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjVkMDYyZTM4NWIyODczMTk0NzRkZjg5NzEyYzhlYmNgnsvh: ]] 00:32:01.349 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjVkMDYyZTM4NWIyODczMTk0NzRkZjg5NzEyYzhlYmNgnsvh: 00:32:01.349 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:32:01.349 02:00:25 
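On the host side, connect_authenticate (host/auth.sh@104, expanded at @55-61 above) is two RPCs: bdev_nvme_set_options restricts negotiation to one digest and one DH group so each iteration tests exactly the combination under test, then bdev_nvme_attach_controller performs the authenticated connect. Outside the harness the same calls look roughly like this; key2 and ckey2 appear to be names of keys registered with SPDK's keyring earlier in the run (the registration is outside this excerpt), not the secrets themselves:

    # Pin the negotiable digest/group, then connect with DH-HMAC-CHAP.
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2   # drop ckey2 for unidirectional auth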
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:01.349 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:01.349 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:01.349 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:01.349 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:01.349 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:01.349 02:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:01.349 02:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.349 02:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:01.349 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:01.349 02:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:01.349 02:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:01.349 02:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:01.349 02:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:01.349 02:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:01.349 02:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:01.349 02:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:01.349 02:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:01.349 02:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:01.349 02:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:01.349 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:01.349 02:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:01.349 02:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.606 nvme0n1 00:32:01.606 02:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:01.606 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:01.606 02:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:01.606 02:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.606 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:01.606 02:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:01.606 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.606 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:01.606 02:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:01.606 02:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.606 02:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:01.606 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:01.606 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:32:01.606 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:01.606 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:01.606 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:01.606 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:01.606 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDE3YzI4NmE5NzRhYzQxYmNmNzNlNjI0NDQ0YWRkZTk1Mzc2OGM5ZTAwYmY3YzcyZTdhNjU0NzRiOGZlYzZhMUzP4ls=: 00:32:01.606 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:01.606 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:01.606 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:01.606 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDE3YzI4NmE5NzRhYzQxYmNmNzNlNjI0NDQ0YWRkZTk1Mzc2OGM5ZTAwYmY3YzcyZTdhNjU0NzRiOGZlYzZhMUzP4ls=: 00:32:01.606 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:01.606 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:32:01.606 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:01.606 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:01.606 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:01.606 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:01.606 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:01.606 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:01.606 02:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:01.606 02:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.606 02:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:01.606 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:01.606 02:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:01.606 02:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:01.606 02:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:01.606 02:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:01.606 02:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:01.606 02:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:01.606 02:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:01.606 02:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:01.607 02:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:01.607 02:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:01.607 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:01.607 02:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:32:01.607 02:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.863 nvme0n1 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTUxY2UxNTgwMWU4MmU2N2VjNDY0ZTE0ZDE1M2NkNDXh1Fno: 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTZkZDJkNmQ2NGZkODAzNjg5YzY4MTllMTdiODBhMjg5NzhmNTc3ZTM3YTA1OTE0ZGE3NjFiY2RlMDViZjYzNSrdd94=: 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTUxY2UxNTgwMWU4MmU2N2VjNDY0ZTE0ZDE1M2NkNDXh1Fno: 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTZkZDJkNmQ2NGZkODAzNjg5YzY4MTllMTdiODBhMjg5NzhmNTc3ZTM3YTA1OTE0ZGE3NjFiY2RlMDViZjYzNSrdd94=: ]] 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTZkZDJkNmQ2NGZkODAzNjg5YzY4MTllMTdiODBhMjg5NzhmNTc3ZTM3YTA1OTE0ZGE3NjFiY2RlMDViZjYzNSrdd94=: 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:01.863 02:00:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.428 nvme0n1 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTE2MWQ1MDY2MjE2NmNlYTExNjA3YWRkMWI0N2Q4MjQ0YTJkNTQxMjY5YmM1Mzc4UUGzTg==: 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==: 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE2MWQ1MDY2MjE2NmNlYTExNjA3YWRkMWI0N2Q4MjQ0YTJkNTQxMjY5YmM1Mzc4UUGzTg==: 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==: ]] 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==: 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:02.428 02:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.993 nvme0n1 00:32:02.993 02:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:02.993 02:00:26 
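On the target half, nvmet_auth_set_key (host/auth.sh@42-51) pushes the same secret into the kernel nvmet target; the echo 'hmac(sha384)' and echo ffdhe6144 steps in the trace are writes into the per-host auth attributes of the nvmet configfs tree. A sketch of the equivalent manual setup, assuming the host entry already exists and reusing the keyid 2 secrets from this trace; the attribute names follow the Linux nvmet configfs layout:

    # Kernel nvmet side of DH-HMAC-CHAP: per-host attributes under configfs.
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'DHHC-1:01:NTdkZGEwMDU3ZmM5NmIxYjYzMzIzNTcwMWEyYjg4NGboUjwp:' > "$host/dhchap_key"
    echo 'DHHC-1:01:ZGJkZDBmNjU3OGM3NzRlNjJjZTg0Njk2YzQzMTBiNDhM0beT: ' > "$host/dhchap_ctrl_key"
    echo 'hmac(sha384)' > "$host/dhchap_hash"      # matches the echo at host/auth.sh@48
    echo ffdhe6144 > "$host/dhchap_dhgroup"        # matches the echo at host/auth.sh@49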
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:02.993 02:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:02.993 02:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.993 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:02.993 02:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:02.993 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:02.993 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:02.993 02:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:02.993 02:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.993 02:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:02.993 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:02.993 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:32:02.993 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:02.993 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:02.993 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:02.993 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:02.993 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkZGEwMDU3ZmM5NmIxYjYzMzIzNTcwMWEyYjg4NGboUjwp: 00:32:02.993 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGJkZDBmNjU3OGM3NzRlNjJjZTg0Njk2YzQzMTBiNDhM0beT: 00:32:02.993 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:02.993 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:02.993 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTdkZGEwMDU3ZmM5NmIxYjYzMzIzNTcwMWEyYjg4NGboUjwp: 00:32:02.993 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGJkZDBmNjU3OGM3NzRlNjJjZTg0Njk2YzQzMTBiNDhM0beT: ]] 00:32:02.993 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGJkZDBmNjU3OGM3NzRlNjJjZTg0Njk2YzQzMTBiNDhM0beT: 00:32:02.993 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:32:02.993 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:02.993 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:02.993 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:02.993 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:02.993 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:02.993 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:02.993 02:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:02.993 02:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.250 02:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:03.250 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:03.250 02:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:32:03.250 02:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:03.250 02:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:03.250 02:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:03.250 02:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:03.250 02:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:03.250 02:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:03.250 02:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:03.250 02:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:03.250 02:00:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:03.250 02:00:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:03.250 02:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:03.250 02:00:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.507 nvme0n1 00:32:03.507 02:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:03.765 02:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:03.765 02:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:03.765 02:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.765 02:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:03.765 02:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:03.765 02:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:03.765 02:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:03.765 02:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:03.765 02:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.765 02:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:03.765 02:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:03.765 02:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:32:03.765 02:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:03.765 02:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:03.765 02:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:03.765 02:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:03.765 02:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yzk3MWRkOTkzZDVjMDg1ZDIwN2E4ODE4YmQ0NDQxZTY0MmFiNTc5ZjhhNzEwZDA14uKIxw==: 00:32:03.765 02:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjVkMDYyZTM4NWIyODczMTk0NzRkZjg5NzEyYzhlYmNgnsvh: 00:32:03.765 02:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:03.765 02:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:03.765 02:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:Yzk3MWRkOTkzZDVjMDg1ZDIwN2E4ODE4YmQ0NDQxZTY0MmFiNTc5ZjhhNzEwZDA14uKIxw==: 00:32:03.765 02:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjVkMDYyZTM4NWIyODczMTk0NzRkZjg5NzEyYzhlYmNgnsvh: ]] 00:32:03.765 02:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjVkMDYyZTM4NWIyODczMTk0NzRkZjg5NzEyYzhlYmNgnsvh: 00:32:03.765 02:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:32:03.765 02:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:03.765 02:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:03.765 02:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:03.765 02:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:03.765 02:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:03.765 02:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:03.765 02:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:03.765 02:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.765 02:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:03.765 02:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:03.765 02:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:03.765 02:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:03.765 02:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:03.765 02:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:03.765 02:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:03.765 02:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:03.766 02:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:03.766 02:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:03.766 02:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:03.766 02:00:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:03.766 02:00:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:03.766 02:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:03.766 02:00:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.340 nvme0n1 00:32:04.340 02:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:04.340 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:04.340 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:04.340 02:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:04.340 02:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.340 02:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:04.340 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:32:04.340 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:04.340 02:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:04.340 02:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.340 02:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:04.340 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:04.340 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:32:04.340 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:04.340 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:04.340 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:04.340 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:04.340 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDE3YzI4NmE5NzRhYzQxYmNmNzNlNjI0NDQ0YWRkZTk1Mzc2OGM5ZTAwYmY3YzcyZTdhNjU0NzRiOGZlYzZhMUzP4ls=: 00:32:04.340 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:04.340 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:04.340 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:04.340 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDE3YzI4NmE5NzRhYzQxYmNmNzNlNjI0NDQ0YWRkZTk1Mzc2OGM5ZTAwYmY3YzcyZTdhNjU0NzRiOGZlYzZhMUzP4ls=: 00:32:04.340 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:04.340 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:32:04.340 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:04.340 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:04.340 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:04.340 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:04.340 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:04.340 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:04.340 02:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:04.340 02:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.340 02:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:04.340 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:04.340 02:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:04.340 02:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:04.340 02:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:04.340 02:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:04.340 02:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:04.340 02:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:04.340 02:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:04.340 02:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
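The repeated nvmf/common.sh@741-755 run before every attach is get_main_ns_ip choosing which address to dial. It maps the transport to the name of an environment variable, then dereferences it, which is why the trace shows ip=NVMF_INITIATOR_IP followed by echo 10.0.0.1. Reconstructed from the trace; the TEST_TRANSPORT variable name is an assumption, since the trace only shows its expanded value, tcp:

    # Candidate walk as traced at nvmf/common.sh@741-755.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT ]] && return 1                  # traced as [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}                  # holds a variable *name*
        [[ -z ${!ip} ]] && return 1                           # traced as [[ -z 10.0.0.1 ]]
        echo "${!ip}"                                         # 10.0.0.1 here
    }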
00:32:04.340 02:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:04.340 02:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:04.340 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:04.340 02:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:04.340 02:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.905 nvme0n1 00:32:04.905 02:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:04.905 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:04.905 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:04.905 02:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:04.905 02:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.905 02:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:04.905 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:04.905 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:04.905 02:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:04.905 02:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.905 02:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:04.905 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:04.905 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:04.905 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:32:04.905 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:04.905 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:04.905 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:04.905 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:04.905 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTUxY2UxNTgwMWU4MmU2N2VjNDY0ZTE0ZDE1M2NkNDXh1Fno: 00:32:04.905 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTZkZDJkNmQ2NGZkODAzNjg5YzY4MTllMTdiODBhMjg5NzhmNTc3ZTM3YTA1OTE0ZGE3NjFiY2RlMDViZjYzNSrdd94=: 00:32:04.905 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:04.905 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:04.906 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTUxY2UxNTgwMWU4MmU2N2VjNDY0ZTE0ZDE1M2NkNDXh1Fno: 00:32:04.906 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTZkZDJkNmQ2NGZkODAzNjg5YzY4MTllMTdiODBhMjg5NzhmNTc3ZTM3YTA1OTE0ZGE3NjFiY2RlMDViZjYzNSrdd94=: ]] 00:32:04.906 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTZkZDJkNmQ2NGZkODAzNjg5YzY4MTllMTdiODBhMjg5NzhmNTc3ZTM3YTA1OTE0ZGE3NjFiY2RlMDViZjYzNSrdd94=: 00:32:04.906 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:32:04.906 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
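Each iteration then proves the authenticated connect actually produced a controller before tearing it down: host/auth.sh@64 lists controllers and compares the single reported name against nvme0 (the \n\v\m\e\0 spelling is just how bash xtrace prints the character-escaped right-hand side of a [[ == ]] pattern match), and @65 detaches so the next (dhgroup, keyid) combination starts clean. In isolation, roughly:

    # Confirm the attach created controller "nvme0", then remove it.
    name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]
    scripts/rpc.py bdev_nvme_detach_controller nvme0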
00:32:04.906 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:04.906 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:04.906 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:04.906 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:04.906 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:04.906 02:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:04.906 02:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.906 02:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:04.906 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:04.906 02:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:04.906 02:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:04.906 02:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:04.906 02:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:04.906 02:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:04.906 02:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:04.906 02:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:04.906 02:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:04.906 02:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:04.906 02:00:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:04.906 02:00:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:04.906 02:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:04.906 02:00:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.836 nvme0n1 00:32:05.836 02:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:05.836 02:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:05.836 02:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:05.836 02:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:05.836 02:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.836 02:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:05.836 02:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:05.836 02:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:05.836 02:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:05.836 02:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.836 02:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:05.836 02:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:05.836 02:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:32:05.836 02:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:05.836 02:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:05.836 02:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:05.836 02:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:05.836 02:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE2MWQ1MDY2MjE2NmNlYTExNjA3YWRkMWI0N2Q4MjQ0YTJkNTQxMjY5YmM1Mzc4UUGzTg==: 00:32:05.836 02:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==: 00:32:05.836 02:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:05.836 02:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:05.836 02:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE2MWQ1MDY2MjE2NmNlYTExNjA3YWRkMWI0N2Q4MjQ0YTJkNTQxMjY5YmM1Mzc4UUGzTg==: 00:32:05.836 02:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==: ]] 00:32:05.836 02:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==: 00:32:05.836 02:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:32:05.836 02:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:05.836 02:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:05.836 02:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:05.836 02:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:05.836 02:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:05.836 02:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:05.836 02:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:05.836 02:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.836 02:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:05.836 02:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:05.836 02:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:05.836 02:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:05.836 02:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:05.836 02:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:05.836 02:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:05.836 02:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:05.837 02:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:05.837 02:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:05.837 02:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:05.837 02:00:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:05.837 02:00:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:05.837 02:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:05.837 02:00:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.769 nvme0n1 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkZGEwMDU3ZmM5NmIxYjYzMzIzNTcwMWEyYjg4NGboUjwp: 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGJkZDBmNjU3OGM3NzRlNjJjZTg0Njk2YzQzMTBiNDhM0beT: 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTdkZGEwMDU3ZmM5NmIxYjYzMzIzNTcwMWEyYjg4NGboUjwp: 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGJkZDBmNjU3OGM3NzRlNjJjZTg0Njk2YzQzMTBiNDhM0beT: ]] 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGJkZDBmNjU3OGM3NzRlNjJjZTg0Njk2YzQzMTBiNDhM0beT: 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:06.769 02:00:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.748 nvme0n1 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:Yzk3MWRkOTkzZDVjMDg1ZDIwN2E4ODE4YmQ0NDQxZTY0MmFiNTc5ZjhhNzEwZDA14uKIxw==: 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjVkMDYyZTM4NWIyODczMTk0NzRkZjg5NzEyYzhlYmNgnsvh: 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yzk3MWRkOTkzZDVjMDg1ZDIwN2E4ODE4YmQ0NDQxZTY0MmFiNTc5ZjhhNzEwZDA14uKIxw==: 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjVkMDYyZTM4NWIyODczMTk0NzRkZjg5NzEyYzhlYmNgnsvh: ]] 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjVkMDYyZTM4NWIyODczMTk0NzRkZjg5NzEyYzhlYmNgnsvh: 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:07.748 02:00:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.682 nvme0n1 00:32:08.682 02:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:08.682 02:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:32:08.682 02:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:08.682 02:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.682 02:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:08.682 02:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:08.682 02:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:08.682 02:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:08.682 02:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:08.682 02:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.682 02:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:08.682 02:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:08.682 02:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:32:08.682 02:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:08.682 02:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:08.682 02:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:08.682 02:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:08.682 02:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDE3YzI4NmE5NzRhYzQxYmNmNzNlNjI0NDQ0YWRkZTk1Mzc2OGM5ZTAwYmY3YzcyZTdhNjU0NzRiOGZlYzZhMUzP4ls=: 00:32:08.682 02:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:08.682 02:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:08.682 02:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:08.682 02:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDE3YzI4NmE5NzRhYzQxYmNmNzNlNjI0NDQ0YWRkZTk1Mzc2OGM5ZTAwYmY3YzcyZTdhNjU0NzRiOGZlYzZhMUzP4ls=: 00:32:08.682 02:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:08.683 02:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:32:08.683 02:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:08.683 02:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:08.683 02:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:08.683 02:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:08.683 02:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:08.683 02:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:08.683 02:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:08.683 02:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.683 02:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:08.683 02:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:08.683 02:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:08.683 02:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:08.683 02:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:08.683 02:00:32 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:08.683 02:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:08.683 02:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:08.683 02:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:08.683 02:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:08.683 02:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:08.683 02:00:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:08.683 02:00:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:08.683 02:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:08.683 02:00:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.617 nvme0n1 00:32:09.617 02:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:09.618 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:09.618 02:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:09.618 02:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.618 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:09.618 02:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:09.618 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:09.618 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:09.618 02:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:09.618 02:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTUxY2UxNTgwMWU4MmU2N2VjNDY0ZTE0ZDE1M2NkNDXh1Fno: 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTZkZDJkNmQ2NGZkODAzNjg5YzY4MTllMTdiODBhMjg5NzhmNTc3ZTM3YTA1OTE0ZGE3NjFiY2RlMDViZjYzNSrdd94=: 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTUxY2UxNTgwMWU4MmU2N2VjNDY0ZTE0ZDE1M2NkNDXh1Fno: 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTZkZDJkNmQ2NGZkODAzNjg5YzY4MTllMTdiODBhMjg5NzhmNTc3ZTM3YTA1OTE0ZGE3NjFiY2RlMDViZjYzNSrdd94=: ]] 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTZkZDJkNmQ2NGZkODAzNjg5YzY4MTllMTdiODBhMjg5NzhmNTc3ZTM3YTA1OTE0ZGE3NjFiY2RlMDViZjYzNSrdd94=: 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.875 nvme0n1 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:09.875 02:00:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE2MWQ1MDY2MjE2NmNlYTExNjA3YWRkMWI0N2Q4MjQ0YTJkNTQxMjY5YmM1Mzc4UUGzTg==: 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==: 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE2MWQ1MDY2MjE2NmNlYTExNjA3YWRkMWI0N2Q4MjQ0YTJkNTQxMjY5YmM1Mzc4UUGzTg==: 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==: ]] 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==: 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:09.875 02:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.133 nvme0n1 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkZGEwMDU3ZmM5NmIxYjYzMzIzNTcwMWEyYjg4NGboUjwp: 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGJkZDBmNjU3OGM3NzRlNjJjZTg0Njk2YzQzMTBiNDhM0beT: 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTdkZGEwMDU3ZmM5NmIxYjYzMzIzNTcwMWEyYjg4NGboUjwp: 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGJkZDBmNjU3OGM3NzRlNjJjZTg0Njk2YzQzMTBiNDhM0beT: ]] 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGJkZDBmNjU3OGM3NzRlNjJjZTg0Njk2YzQzMTBiNDhM0beT: 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:10.133 02:00:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.391 nvme0n1 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:10.391 02:00:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yzk3MWRkOTkzZDVjMDg1ZDIwN2E4ODE4YmQ0NDQxZTY0MmFiNTc5ZjhhNzEwZDA14uKIxw==: 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjVkMDYyZTM4NWIyODczMTk0NzRkZjg5NzEyYzhlYmNgnsvh: 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yzk3MWRkOTkzZDVjMDg1ZDIwN2E4ODE4YmQ0NDQxZTY0MmFiNTc5ZjhhNzEwZDA14uKIxw==: 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjVkMDYyZTM4NWIyODczMTk0NzRkZjg5NzEyYzhlYmNgnsvh: ]] 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjVkMDYyZTM4NWIyODczMTk0NzRkZjg5NzEyYzhlYmNgnsvh: 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:10.391 02:00:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.391 nvme0n1 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:10.391 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:10.392 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.392 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:10.392 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:10.392 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:10.392 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:10.392 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.392 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:10.392 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:10.392 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:32:10.392 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:10.392 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:10.392 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:10.392 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:10.392 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDE3YzI4NmE5NzRhYzQxYmNmNzNlNjI0NDQ0YWRkZTk1Mzc2OGM5ZTAwYmY3YzcyZTdhNjU0NzRiOGZlYzZhMUzP4ls=: 00:32:10.392 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:10.392 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:10.392 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:10.392 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDE3YzI4NmE5NzRhYzQxYmNmNzNlNjI0NDQ0YWRkZTk1Mzc2OGM5ZTAwYmY3YzcyZTdhNjU0NzRiOGZlYzZhMUzP4ls=: 00:32:10.392 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:10.392 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:32:10.392 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:10.392 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:10.392 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:10.392 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:10.392 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:10.392 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:10.392 02:00:34 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:32:10.392 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.650 nvme0n1 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTUxY2UxNTgwMWU4MmU2N2VjNDY0ZTE0ZDE1M2NkNDXh1Fno: 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTZkZDJkNmQ2NGZkODAzNjg5YzY4MTllMTdiODBhMjg5NzhmNTc3ZTM3YTA1OTE0ZGE3NjFiY2RlMDViZjYzNSrdd94=: 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTUxY2UxNTgwMWU4MmU2N2VjNDY0ZTE0ZDE1M2NkNDXh1Fno: 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTZkZDJkNmQ2NGZkODAzNjg5YzY4MTllMTdiODBhMjg5NzhmNTc3ZTM3YTA1OTE0ZGE3NjFiY2RlMDViZjYzNSrdd94=: ]] 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTZkZDJkNmQ2NGZkODAzNjg5YzY4MTllMTdiODBhMjg5NzhmNTc3ZTM3YTA1OTE0ZGE3NjFiY2RlMDViZjYzNSrdd94=: 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:10.650 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.908 nvme0n1 00:32:10.908 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:10.908 
02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:10.908 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:10.908 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:10.908 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.908 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:10.908 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:10.908 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:10.908 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:10.908 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.908 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:10.908 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:10.908 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:32:10.908 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:10.908 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:10.908 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:10.908 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:10.908 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE2MWQ1MDY2MjE2NmNlYTExNjA3YWRkMWI0N2Q4MjQ0YTJkNTQxMjY5YmM1Mzc4UUGzTg==: 00:32:10.908 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==: 00:32:10.908 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:10.908 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:10.908 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE2MWQ1MDY2MjE2NmNlYTExNjA3YWRkMWI0N2Q4MjQ0YTJkNTQxMjY5YmM1Mzc4UUGzTg==: 00:32:10.908 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==: ]] 00:32:10.908 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==: 00:32:10.908 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:32:10.908 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:10.908 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:10.908 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:10.908 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:10.908 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:10.908 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:10.908 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:10.908 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.908 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:10.908 02:00:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:10.908 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:10.908 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:10.908 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:10.908 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:10.908 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:10.908 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:10.908 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:10.908 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:10.908 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:10.908 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:10.908 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:10.908 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:10.908 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.167 nvme0n1 00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkZGEwMDU3ZmM5NmIxYjYzMzIzNTcwMWEyYjg4NGboUjwp: 00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGJkZDBmNjU3OGM3NzRlNjJjZTg0Njk2YzQzMTBiNDhM0beT: 00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
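
The echo calls traced at host/auth.sh@48-51 above configure the target side of DH-HMAC-CHAP before each connection attempt: a digest ('hmac(sha512)'), a DH group (ffdhe3072), the key, and, when one exists, the controller key. The trace shows only the echoed values, not their redirect targets, so the configfs paths below are an assumption based on the standard Linux kernel nvmet layout; a minimal reconstruction of nvmet_auth_set_key under that assumption:

# Sketch of nvmet_auth_set_key as traced at host/auth.sh@42-51.
# ASSUMPTION: the echoes land in the kernel nvmet configfs host entry;
# the real script's redirect targets are not visible in this trace.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key="${keys[$keyid]}" ckey="${ckeys[$keyid]}"
    local hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac(${digest})" > "${hostdir}/dhchap_hash"
    echo "${dhgroup}"      > "${hostdir}/dhchap_dhgroup"
    echo "${key}"          > "${hostdir}/dhchap_key"
    # The controller key is optional; keyid 4 in this run has none,
    # matching the [[ -z '' ]] guard traced at host/auth.sh@51.
    [[ -z "$ckey" ]] || echo "$ckey" > "${hostdir}/dhchap_ctrl_key"
}
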
00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTdkZGEwMDU3ZmM5NmIxYjYzMzIzNTcwMWEyYjg4NGboUjwp:
00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGJkZDBmNjU3OGM3NzRlNjJjZTg0Njk2YzQzMTBiNDhM0beT: ]]
00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGJkZDBmNjU3OGM3NzRlNjJjZTg0Njk2YzQzMTBiNDhM0beT:
00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2
00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:11.167 02:00:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:11.425 nvme0n1
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yzk3MWRkOTkzZDVjMDg1ZDIwN2E4ODE4YmQ0NDQxZTY0MmFiNTc5ZjhhNzEwZDA14uKIxw==:
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjVkMDYyZTM4NWIyODczMTk0NzRkZjg5NzEyYzhlYmNgnsvh:
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yzk3MWRkOTkzZDVjMDg1ZDIwN2E4ODE4YmQ0NDQxZTY0MmFiNTc5ZjhhNzEwZDA14uKIxw==:
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjVkMDYyZTM4NWIyODczMTk0NzRkZjg5NzEyYzhlYmNgnsvh: ]]
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjVkMDYyZTM4NWIyODczMTk0NzRkZjg5NzEyYzhlYmNgnsvh:
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
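The nvmf/common.sh@741-755 lines that keep repeating are get_main_ns_ip resolving which address the host should dial. Its logic is fully visible in the trace; a sketch, under the assumption that the transport selector variable is named TEST_TRANSPORT (only its value, tcp, is shown):

	get_main_ns_ip() {
		local ip
		local -A ip_candidates
		ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
		ip_candidates["tcp"]=NVMF_INITIATOR_IP
		# The map stores the *name* of the variable that holds the IP;
		# pick it by transport, then dereference with ${!ip}.
		[[ -z $TEST_TRANSPORT ]] && return 1
		[[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
		ip=${ip_candidates[$TEST_TRANSPORT]}
		[[ -z ${!ip} ]] && return 1
		echo "${!ip}"   # 10.0.0.1 throughout this run
	}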
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:11.425 nvme0n1
00:32:11.425 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:11.426 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:11.426 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:11.426 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:11.426 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:11.426 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:11.684 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:11.684 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:11.684 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:11.684 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:11.684 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:11.684 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:11.684 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4
00:32:11.684 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:11.684 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:32:11.684 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:32:11.684 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:32:11.684 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDE3YzI4NmE5NzRhYzQxYmNmNzNlNjI0NDQ0YWRkZTk1Mzc2OGM5ZTAwYmY3YzcyZTdhNjU0NzRiOGZlYzZhMUzP4ls=:
00:32:11.684 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:32:11.684 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:32:11.684 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:32:11.684 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDE3YzI4NmE5NzRhYzQxYmNmNzNlNjI0NDQ0YWRkZTk1Mzc2OGM5ZTAwYmY3YzcyZTdhNjU0NzRiOGZlYzZhMUzP4ls=:
00:32:11.684 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:32:11.684 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4
00:32:11.684 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:11.684 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:32:11.684 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:32:11.684 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:32:11.684 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:11.684 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:32:11.684 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:11.684 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:11.684 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:11.684 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:11.684 02:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:11.684 02:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:11.684 02:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:11.684 02:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:11.684 02:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:11.684 02:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:11.684 02:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:11.684 02:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:11.684 02:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:11.685 02:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:11.685 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:32:11.685 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:11.685 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:11.685 nvme0n1
00:32:11.685 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:11.685 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:11.685 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:11.685 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:11.685 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:11.685 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:11.685 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:11.685 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:11.685 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:11.685 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:11.685 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:11.685 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:32:11.685 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:11.685 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0
00:32:11.685 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:11.685 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:32:11.685 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:32:11.685 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:32:11.685 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTUxY2UxNTgwMWU4MmU2N2VjNDY0ZTE0ZDE1M2NkNDXh1Fno:
00:32:11.685 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTZkZDJkNmQ2NGZkODAzNjg5YzY4MTllMTdiODBhMjg5NzhmNTc3ZTM3YTA1OTE0ZGE3NjFiY2RlMDViZjYzNSrdd94=:
00:32:11.685 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:32:11.685 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:32:11.685 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTUxY2UxNTgwMWU4MmU2N2VjNDY0ZTE0ZDE1M2NkNDXh1Fno:
00:32:11.942 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTZkZDJkNmQ2NGZkODAzNjg5YzY4MTllMTdiODBhMjg5NzhmNTc3ZTM3YTA1OTE0ZGE3NjFiY2RlMDViZjYzNSrdd94=: ]]
00:32:11.942 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTZkZDJkNmQ2NGZkODAzNjg5YzY4MTllMTdiODBhMjg5NzhmNTc3ZTM3YTA1OTE0ZGE3NjFiY2RlMDViZjYzNSrdd94=:
00:32:11.942 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0
00:32:11.942 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:11.942 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:32:11.942 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:32:11.942 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:32:11.942 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:11.942 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:32:11.942 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:11.942 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:11.942 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:11.942 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:11.942 02:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:11.942 02:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:11.942 02:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:11.942 02:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:11.942 02:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:11.942 02:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:11.942 02:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:11.942 02:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:11.942 02:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:11.942 02:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:11.942 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:32:11.942 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:11.942 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:12.200 nvme0n1
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE2MWQ1MDY2MjE2NmNlYTExNjA3YWRkMWI0N2Q4MjQ0YTJkNTQxMjY5YmM1Mzc4UUGzTg==:
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==:
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE2MWQ1MDY2MjE2NmNlYTExNjA3YWRkMWI0N2Q4MjQ0YTJkNTQxMjY5YmM1Mzc4UUGzTg==:
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==: ]]
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==:
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:12.200 02:00:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:12.461 nvme0n1
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
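Each nvmet_auth_set_key round is immediately followed by connect_authenticate (host/auth.sh@55-65), which flips the same digest/dhgroup on the SPDK host side and proves a full connect/verify/disconnect round trip. A sketch of that sequence as it appears in the trace (a reconstruction, not the verbatim script; rpc_cmd is the autotest wrapper around the target's JSON-RPC interface):

	connect_authenticate() {
		local digest=$1 dhgroup=$2 keyid=$3
		local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
		# Restrict the host to exactly one digest/dhgroup combination,
		rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
		# connect with the per-keyid credentials,
		rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
			-a "$(get_main_ns_ip)" -s 4420 -q nqn.2024-02.io.spdk:host0 \
			-n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"
		# verify the controller actually came up,
		[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
		# and tear it down before the next (digest, dhgroup, keyid) round.
		rpc_cmd bdev_nvme_detach_controller nvme0
	}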
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkZGEwMDU3ZmM5NmIxYjYzMzIzNTcwMWEyYjg4NGboUjwp:
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGJkZDBmNjU3OGM3NzRlNjJjZTg0Njk2YzQzMTBiNDhM0beT:
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTdkZGEwMDU3ZmM5NmIxYjYzMzIzNTcwMWEyYjg4NGboUjwp:
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGJkZDBmNjU3OGM3NzRlNjJjZTg0Njk2YzQzMTBiNDhM0beT: ]]
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGJkZDBmNjU3OGM3NzRlNjJjZTg0Njk2YzQzMTBiNDhM0beT:
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:12.461 02:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:12.719 nvme0n1
00:32:12.719 02:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:12.719 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:12.719 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:12.719 02:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:12.719 02:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:12.719 02:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:12.719 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:12.719 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:12.719 02:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:12.719 02:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:12.719 02:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:12.719 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:12.719 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3
00:32:12.719 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:12.719 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:32:12.719 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:32:12.719 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:32:12.719 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yzk3MWRkOTkzZDVjMDg1ZDIwN2E4ODE4YmQ0NDQxZTY0MmFiNTc5ZjhhNzEwZDA14uKIxw==:
00:32:12.719 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjVkMDYyZTM4NWIyODczMTk0NzRkZjg5NzEyYzhlYmNgnsvh:
00:32:12.719 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:32:12.719 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:32:12.719 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yzk3MWRkOTkzZDVjMDg1ZDIwN2E4ODE4YmQ0NDQxZTY0MmFiNTc5ZjhhNzEwZDA14uKIxw==:
00:32:12.719 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjVkMDYyZTM4NWIyODczMTk0NzRkZjg5NzEyYzhlYmNgnsvh: ]]
00:32:12.719 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjVkMDYyZTM4NWIyODczMTk0NzRkZjg5NzEyYzhlYmNgnsvh:
00:32:12.719 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3
00:32:12.719 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:12.719 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:32:12.719 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:32:12.719 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:32:12.719 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:12.719 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:32:12.719 02:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:12.719 02:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:12.719 02:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:12.719 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:12.719 02:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:12.720 02:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:12.720 02:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:12.720 02:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:12.720 02:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:12.720 02:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:12.720 02:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:12.720 02:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:12.720 02:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:12.720 02:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:12.720 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:32:12.720 02:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:12.720 02:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:12.978 nvme0n1
00:32:12.978 02:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:12.978 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:12.978 02:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:12.978 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:12.978 02:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:12.978 02:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:12.978 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:12.978 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:12.978 02:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:12.978 02:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:12.978 02:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:12.978 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:12.978 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4
00:32:12.978 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:12.978 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:32:12.978 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:32:12.978 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:32:12.978 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDE3YzI4NmE5NzRhYzQxYmNmNzNlNjI0NDQ0YWRkZTk1Mzc2OGM5ZTAwYmY3YzcyZTdhNjU0NzRiOGZlYzZhMUzP4ls=:
00:32:12.978 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:32:12.978 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:32:12.978 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:32:12.978 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDE3YzI4NmE5NzRhYzQxYmNmNzNlNjI0NDQ0YWRkZTk1Mzc2OGM5ZTAwYmY3YzcyZTdhNjU0NzRiOGZlYzZhMUzP4ls=:
00:32:12.978 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:32:12.978 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4
00:32:12.978 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:12.978 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:32:12.978 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:32:12.978 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:32:12.978 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:12.978 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:32:12.978 02:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:12.978 02:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:12.978 02:00:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:12.978 02:00:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:12.978 02:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:12.978 02:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:12.978 02:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:12.978 02:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:12.978 02:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:12.978 02:00:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
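Worth noting for the keyid=4 rounds above: ckey is empty there (the [[ -z '' ]] test at host/auth.sh@51), so the ${ckeys[keyid]:+...} expansion at host/auth.sh@58 yields an empty array and the attach call carries no --dhchap-ctrlr-key at all, i.e. only the host is authenticated, not the controller. A self-contained demo of that optional-argument idiom, with hypothetical values:

	ckeys=([1]=DHHC-1:02:example [4]=)   # hypothetical; keyid 4 has no ctrl key
	for keyid in 1 4; do
		ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
		echo "keyid=$keyid argv:" --dhchap-key "key${keyid}" "${ckey[@]}"
	done
	# keyid=1 argv: --dhchap-key key1 --dhchap-ctrlr-key ckey1
	# keyid=4 argv: --dhchap-key key4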
00:32:13.494 02:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:13.494 02:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:13.494 02:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:13.494 02:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:13.494 02:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:32:13.494 02:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:13.494 02:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:14.058 nvme0n1
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE2MWQ1MDY2MjE2NmNlYTExNjA3YWRkMWI0N2Q4MjQ0YTJkNTQxMjY5YmM1Mzc4UUGzTg==:
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==:
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE2MWQ1MDY2MjE2NmNlYTExNjA3YWRkMWI0N2Q4MjQ0YTJkNTQxMjY5YmM1Mzc4UUGzTg==:
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==: ]]
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==:
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1
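The DHHC-1 strings cycling through these rounds are NVMe DH-HMAC-CHAP secrets in their standard textual form, DHHC-1:&lt;hmac-id&gt;:&lt;base64&gt;:. As I read the secret format, hmac-id 00 means the secret is used as-is while 01/02/03 select a SHA-256/384/512 transform, and the base64 payload carries the secret plus a trailing 4-byte CRC. A quick way to eyeball one of the keys from this log (field names in the snippet are mine):

	key='DHHC-1:01:NTdkZGEwMDU3ZmM5NmIxYjYzMzIzNTcwMWEyYjg4NGboUjwp:'
	IFS=: read -r fmt hmac b64 _ <<<"$key"
	printf 'format=%s hmac_id=%s payload_bytes=%s\n' \
		"$fmt" "$hmac" "$(printf '%s' "$b64" | base64 -d | wc -c)"
	# format=DHHC-1 hmac_id=01 payload_bytes=36
	# (36 = 32-byte secret + 4-byte CRC, assuming the usual secret layout)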
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:14.058 02:00:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:14.625 nvme0n1
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkZGEwMDU3ZmM5NmIxYjYzMzIzNTcwMWEyYjg4NGboUjwp:
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGJkZDBmNjU3OGM3NzRlNjJjZTg0Njk2YzQzMTBiNDhM0beT:
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTdkZGEwMDU3ZmM5NmIxYjYzMzIzNTcwMWEyYjg4NGboUjwp:
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGJkZDBmNjU3OGM3NzRlNjJjZTg0Njk2YzQzMTBiNDhM0beT: ]]
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGJkZDBmNjU3OGM3NzRlNjJjZTg0Njk2YzQzMTBiNDhM0beT:
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:14.625 02:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:15.191 nvme0n1
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yzk3MWRkOTkzZDVjMDg1ZDIwN2E4ODE4YmQ0NDQxZTY0MmFiNTc5ZjhhNzEwZDA14uKIxw==:
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjVkMDYyZTM4NWIyODczMTk0NzRkZjg5NzEyYzhlYmNgnsvh:
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yzk3MWRkOTkzZDVjMDg1ZDIwN2E4ODE4YmQ0NDQxZTY0MmFiNTc5ZjhhNzEwZDA14uKIxw==:
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjVkMDYyZTM4NWIyODczMTk0NzRkZjg5NzEyYzhlYmNgnsvh: ]]
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjVkMDYyZTM4NWIyODczMTk0NzRkZjg5NzEyYzhlYmNgnsvh:
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:15.191 02:00:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:15.448 nvme0n1
00:32:15.448 02:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:15.448 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:15.448 02:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:15.448 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:15.448 02:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:15.448 02:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:15.704 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:15.704 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:15.704 02:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:15.704 02:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:15.704 02:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:15.704 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:15.704 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4
00:32:15.704 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:15.704 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:32:15.704 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:32:15.704 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:32:15.704 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDE3YzI4NmE5NzRhYzQxYmNmNzNlNjI0NDQ0YWRkZTk1Mzc2OGM5ZTAwYmY3YzcyZTdhNjU0NzRiOGZlYzZhMUzP4ls=:
00:32:15.704 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:32:15.704 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:32:15.704 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:32:15.704 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDE3YzI4NmE5NzRhYzQxYmNmNzNlNjI0NDQ0YWRkZTk1Mzc2OGM5ZTAwYmY3YzcyZTdhNjU0NzRiOGZlYzZhMUzP4ls=:
00:32:15.704 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:32:15.704 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4
00:32:15.704 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:15.704 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:32:15.704 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:32:15.704 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:32:15.704 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:15.704 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:32:15.704 02:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:15.704 02:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:15.704 02:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:15.704 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:15.704 02:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:15.704 02:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:15.704 02:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:15.704 02:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:15.704 02:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:15.704 02:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:15.704 02:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:15.704 02:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:15.704 02:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:15.704 02:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:15.704 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:32:15.704 02:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:15.704 02:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:16.269 nvme0n1
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTUxY2UxNTgwMWU4MmU2N2VjNDY0ZTE0ZDE1M2NkNDXh1Fno:
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTZkZDJkNmQ2NGZkODAzNjg5YzY4MTllMTdiODBhMjg5NzhmNTc3ZTM3YTA1OTE0ZGE3NjFiY2RlMDViZjYzNSrdd94=:
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTUxY2UxNTgwMWU4MmU2N2VjNDY0ZTE0ZDE1M2NkNDXh1Fno:
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTZkZDJkNmQ2NGZkODAzNjg5YzY4MTllMTdiODBhMjg5NzhmNTc3ZTM3YTA1OTE0ZGE3NjFiY2RlMDViZjYzNSrdd94=: ]]
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTZkZDJkNmQ2NGZkODAzNjg5YzY4MTllMTdiODBhMjg5NzhmNTc3ZTM3YTA1OTE0ZGE3NjFiY2RlMDViZjYzNSrdd94=:
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:16.269 02:00:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:17.201 nvme0n1
00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1
00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE2MWQ1MDY2MjE2NmNlYTExNjA3YWRkMWI0N2Q4MjQ0YTJkNTQxMjY5YmM1Mzc4UUGzTg==:
00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==:
00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo
DHHC-1:00:MTE2MWQ1MDY2MjE2NmNlYTExNjA3YWRkMWI0N2Q4MjQ0YTJkNTQxMjY5YmM1Mzc4UUGzTg==: 00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==: ]] 00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==: 00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:17.201 02:00:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.131 nvme0n1 00:32:18.131 02:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:18.131 02:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:18.132 02:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:18.132 02:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:18.132 02:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.132 02:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:18.132 02:00:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:18.132 02:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:18.132 02:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:18.132 02:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.132 02:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:18.132 02:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:18.132 02:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:32:18.132 02:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:18.132 02:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:18.132 02:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:18.132 02:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:18.132 02:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTdkZGEwMDU3ZmM5NmIxYjYzMzIzNTcwMWEyYjg4NGboUjwp: 00:32:18.132 02:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGJkZDBmNjU3OGM3NzRlNjJjZTg0Njk2YzQzMTBiNDhM0beT: 00:32:18.132 02:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:18.132 02:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:18.132 02:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTdkZGEwMDU3ZmM5NmIxYjYzMzIzNTcwMWEyYjg4NGboUjwp: 00:32:18.132 02:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGJkZDBmNjU3OGM3NzRlNjJjZTg0Njk2YzQzMTBiNDhM0beT: ]] 00:32:18.132 02:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGJkZDBmNjU3OGM3NzRlNjJjZTg0Njk2YzQzMTBiNDhM0beT: 00:32:18.132 02:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:32:18.132 02:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:18.132 02:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:18.132 02:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:18.132 02:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:18.132 02:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:18.132 02:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:18.132 02:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:18.132 02:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.132 02:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:18.132 02:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:18.132 02:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:18.132 02:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:18.132 02:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:18.132 02:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:18.132 02:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:18.132 02:00:41 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:18.132 02:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:18.132 02:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:18.132 02:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:18.132 02:00:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:18.132 02:00:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:18.132 02:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:18.132 02:00:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.064 nvme0n1 00:32:19.064 02:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:19.064 02:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:19.064 02:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:19.064 02:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.064 02:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:19.064 02:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:19.064 02:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:19.064 02:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:19.064 02:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:19.064 02:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.064 02:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:19.064 02:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:19.064 02:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:32:19.064 02:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:19.064 02:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:19.064 02:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:19.064 02:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:19.064 02:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yzk3MWRkOTkzZDVjMDg1ZDIwN2E4ODE4YmQ0NDQxZTY0MmFiNTc5ZjhhNzEwZDA14uKIxw==: 00:32:19.064 02:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjVkMDYyZTM4NWIyODczMTk0NzRkZjg5NzEyYzhlYmNgnsvh: 00:32:19.064 02:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:19.064 02:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:19.064 02:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yzk3MWRkOTkzZDVjMDg1ZDIwN2E4ODE4YmQ0NDQxZTY0MmFiNTc5ZjhhNzEwZDA14uKIxw==: 00:32:19.064 02:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjVkMDYyZTM4NWIyODczMTk0NzRkZjg5NzEyYzhlYmNgnsvh: ]] 00:32:19.064 02:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjVkMDYyZTM4NWIyODczMTk0NzRkZjg5NzEyYzhlYmNgnsvh: 00:32:19.064 02:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:32:19.064 02:00:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:19.064 02:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:19.064 02:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:19.064 02:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:19.064 02:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:19.065 02:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:19.065 02:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:19.065 02:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.065 02:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:19.065 02:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:19.065 02:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:19.065 02:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:19.065 02:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:19.065 02:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:19.065 02:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:19.065 02:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:19.065 02:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:19.065 02:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:19.065 02:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:19.065 02:00:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:19.065 02:00:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:19.065 02:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:19.065 02:00:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.009 nvme0n1 00:32:20.009 02:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:20.009 02:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.009 02:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:20.009 02:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:20.009 02:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.009 02:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:20.009 02:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.009 02:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:20.009 02:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:20.009 02:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.009 02:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:20.009 02:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:20.009 02:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:32:20.009 02:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:20.009 02:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:20.009 02:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:20.009 02:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:20.009 02:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDE3YzI4NmE5NzRhYzQxYmNmNzNlNjI0NDQ0YWRkZTk1Mzc2OGM5ZTAwYmY3YzcyZTdhNjU0NzRiOGZlYzZhMUzP4ls=: 00:32:20.009 02:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:20.009 02:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:20.009 02:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:20.009 02:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDE3YzI4NmE5NzRhYzQxYmNmNzNlNjI0NDQ0YWRkZTk1Mzc2OGM5ZTAwYmY3YzcyZTdhNjU0NzRiOGZlYzZhMUzP4ls=: 00:32:20.009 02:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:20.009 02:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:32:20.009 02:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:20.009 02:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:20.009 02:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:20.009 02:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:20.009 02:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:20.009 02:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:20.009 02:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:20.009 02:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.009 02:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:20.009 02:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:20.010 02:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:20.010 02:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:20.010 02:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:20.010 02:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.010 02:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.010 02:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:20.010 02:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.010 02:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:20.010 02:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:20.010 02:00:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:20.010 02:00:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:20.010 02:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:32:20.010 02:00:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.940 nvme0n1 00:32:20.940 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:20.940 02:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.940 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:20.940 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.940 02:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:20.940 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:20.940 02:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.940 02:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:20.940 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:20.940 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.940 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:20.940 02:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:20.940 02:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:20.940 02:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:20.940 02:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:20.940 02:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:20.940 02:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTE2MWQ1MDY2MjE2NmNlYTExNjA3YWRkMWI0N2Q4MjQ0YTJkNTQxMjY5YmM1Mzc4UUGzTg==: 00:32:20.940 02:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==: 00:32:20.940 02:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:20.940 02:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:20.940 02:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTE2MWQ1MDY2MjE2NmNlYTExNjA3YWRkMWI0N2Q4MjQ0YTJkNTQxMjY5YmM1Mzc4UUGzTg==: 00:32:20.940 02:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==: ]] 00:32:20.940 02:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjQzYzg4ZGM1MjcyM2FmMDJhZmZmODhlZTA4MzQ3ZjhmNzM3NGJkZTMwYWIzOWViqMQTQA==: 00:32:20.940 02:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:20.940 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:20.940 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.940 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:20.940 02:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:32:20.940 02:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:20.940 02:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:20.940 02:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:20.940 02:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.940 
02:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.940 02:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:20.940 02:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.940 02:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:20.940 02:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:20.940 02:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:20.940 02:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:20.940 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:32:20.940 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:20.940 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:32:20.940 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:20.940 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:32:20.940 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:20.940 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:20.940 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:20.940 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.196 request: 00:32:21.196 { 00:32:21.196 "name": "nvme0", 00:32:21.196 "trtype": "tcp", 00:32:21.196 "traddr": "10.0.0.1", 00:32:21.196 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:21.196 "adrfam": "ipv4", 00:32:21.196 "trsvcid": "4420", 00:32:21.196 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:21.196 "method": "bdev_nvme_attach_controller", 00:32:21.196 "req_id": 1 00:32:21.196 } 00:32:21.196 Got JSON-RPC error response 00:32:21.196 response: 00:32:21.196 { 00:32:21.196 "code": -32602, 00:32:21.196 "message": "Invalid parameters" 00:32:21.196 } 00:32:21.196 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:32:21.196 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:32:21.196 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:32:21.196 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:32:21.196 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:32:21.196 02:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.196 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:21.196 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.196 02:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:32:21.196 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:21.196 02:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:32:21.196 
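The failed attach above is deliberate: host/auth.sh runs the connect under the NOT wrapper, so the step passes only when the target rejects a connection that carries no DHCHAP key, and the follow-up jq length check confirms no controller was left behind. A minimal standalone sketch of the same assertion, assuming a stock rpc.py client at ./scripts/rpc.py (that path and the helper name are illustrative, not the exact autotest code):

# assert_unauth_connect_rejected: invert the attach result so a refused
# connect counts as success, then require zero surviving controllers
assert_unauth_connect_rejected() {
  local rpc=./scripts/rpc.py
  if "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
    echo "unauthenticated connect unexpectedly succeeded" >&2
    return 1
  fi
  [[ $("$rpc" bdev_nvme_get_controllers | jq length) -eq 0 ]]
}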
02:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:32:21.196 02:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:21.196 02:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:21.196 02:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:21.196 02:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.197 request: 00:32:21.197 { 00:32:21.197 "name": "nvme0", 00:32:21.197 "trtype": "tcp", 00:32:21.197 "traddr": "10.0.0.1", 00:32:21.197 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:21.197 "adrfam": "ipv4", 00:32:21.197 "trsvcid": "4420", 00:32:21.197 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:21.197 "dhchap_key": "key2", 00:32:21.197 "method": "bdev_nvme_attach_controller", 00:32:21.197 "req_id": 1 00:32:21.197 } 00:32:21.197 Got JSON-RPC error response 00:32:21.197 response: 00:32:21.197 { 00:32:21.197 "code": -32602, 00:32:21.197 "message": "Invalid parameters" 00:32:21.197 } 00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 
00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:21.197 02:00:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.197 request: 00:32:21.197 { 00:32:21.197 "name": "nvme0", 00:32:21.197 "trtype": "tcp", 00:32:21.197 "traddr": "10.0.0.1", 00:32:21.197 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:21.197 "adrfam": "ipv4", 00:32:21.197 "trsvcid": "4420", 00:32:21.197 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:21.197 "dhchap_key": "key1", 00:32:21.197 "dhchap_ctrlr_key": "ckey2", 00:32:21.197 "method": "bdev_nvme_attach_controller", 00:32:21.197 
"req_id": 1 00:32:21.197 } 00:32:21.197 Got JSON-RPC error response 00:32:21.197 response: 00:32:21.197 { 00:32:21.197 "code": -32602, 00:32:21.197 "message": "Invalid parameters" 00:32:21.197 } 00:32:21.197 02:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:32:21.197 02:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:32:21.197 02:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:32:21.197 02:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:32:21.197 02:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:32:21.197 02:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:32:21.197 02:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:32:21.197 02:00:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:32:21.197 02:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:21.197 02:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:32:21.197 02:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:21.197 02:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:32:21.197 02:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:21.197 02:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:21.197 rmmod nvme_tcp 00:32:21.197 rmmod nvme_fabrics 00:32:21.197 02:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:21.197 02:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:32:21.197 02:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:32:21.197 02:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 950 ']' 00:32:21.197 02:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 950 00:32:21.197 02:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@947 -- # '[' -z 950 ']' 00:32:21.197 02:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # kill -0 950 00:32:21.197 02:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # uname 00:32:21.197 02:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:32:21.197 02:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 950 00:32:21.197 02:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:32:21.197 02:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:32:21.197 02:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # echo 'killing process with pid 950' 00:32:21.197 killing process with pid 950 00:32:21.197 02:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # kill 950 00:32:21.197 02:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@971 -- # wait 950 00:32:21.454 02:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:21.454 02:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:21.454 02:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:21.454 02:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:21.454 02:00:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:21.454 02:00:45 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:21.454 02:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:21.454 02:00:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:24.002 02:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:24.002 02:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:24.002 02:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:24.002 02:00:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:32:24.002 02:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:32:24.002 02:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:32:24.002 02:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:24.002 02:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:24.002 02:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:24.002 02:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:24.002 02:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:24.002 02:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:24.002 02:00:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:24.936 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:24.936 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:24.936 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:24.936 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:24.936 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:24.936 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:24.936 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:24.936 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:24.936 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:24.936 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:24.936 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:24.936 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:24.936 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:24.936 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:24.936 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:24.936 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:25.869 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:32:26.127 02:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Tmc /tmp/spdk.key-null.64Y /tmp/spdk.key-sha256.F6B /tmp/spdk.key-sha384.YKu /tmp/spdk.key-sha512.iln /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:32:26.127 02:00:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:27.501 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:32:27.501 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:32:27.501 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:32:27.501 
0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:32:27.501 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:32:27.502 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:32:27.502 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:32:27.502 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:32:27.502 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:32:27.502 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:32:27.502 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:32:27.502 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:32:27.502 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:32:27.502 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:32:27.502 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:32:27.502 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:32:27.502 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:32:27.759 00:32:27.759 real 0m47.512s 00:32:27.759 user 0m44.487s 00:32:27.759 sys 0m6.501s 00:32:27.759 02:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # xtrace_disable 00:32:27.759 02:00:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.759 ************************************ 00:32:27.759 END TEST nvmf_auth_host 00:32:27.759 ************************************ 00:32:27.759 02:00:51 nvmf_tcp -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:32:27.759 02:00:51 nvmf_tcp -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:27.759 02:00:51 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:32:27.759 02:00:51 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:32:27.759 02:00:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:27.759 ************************************ 00:32:27.759 START TEST nvmf_digest 00:32:27.759 ************************************ 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:27.759 * Looking for test storage... 
00:32:27.759 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:27.759 02:00:51 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:32:27.759 02:00:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:32:30.286 Found 0000:09:00.0 (0x8086 - 0x159b) 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:32:30.286 Found 0000:09:00.1 (0x8086 - 0x159b) 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:32:30.286 Found net devices under 0000:09:00.0: cvl_0_0 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:32:30.286 Found net devices under 0000:09:00.1: cvl_0_1 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:30.286 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:30.545 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:30.545 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:30.545 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:30.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:30.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:32:30.545 00:32:30.545 --- 10.0.0.2 ping statistics --- 00:32:30.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:30.545 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:32:30.545 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:30.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:30.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:32:30.545 00:32:30.545 --- 10.0.0.1 ping statistics --- 00:32:30.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:30.545 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:32:30.545 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:30.545 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:32:30.545 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:30.545 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:30.545 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:30.545 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:30.545 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:30.545 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:30.545 02:00:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:30.545 02:00:54 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:30.545 02:00:54 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:32:30.545 02:00:54 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:32:30.545 02:00:54 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:32:30.545 02:00:54 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1104 -- # xtrace_disable 00:32:30.545 02:00:54 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:30.545 ************************************ 00:32:30.545 START TEST nvmf_digest_clean 00:32:30.545 ************************************ 00:32:30.545 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # run_digest 00:32:30.545 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:32:30.545 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:32:30.545 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:32:30.545 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:32:30.545 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:32:30.545 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:30.545 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@721 -- # xtrace_disable 00:32:30.545 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:30.545 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=11380 00:32:30.545 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:30.545 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 11380 00:32:30.545 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # '[' -z 11380 ']' 00:32:30.545 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:30.545 
02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:32:30.545 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:30.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:30.545 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:32:30.545 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:30.545 [2024-05-15 02:00:54.347473] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:32:30.545 [2024-05-15 02:00:54.347549] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:30.545 EAL: No free 2048 kB hugepages reported on node 1 00:32:30.545 [2024-05-15 02:00:54.422882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:30.802 [2024-05-15 02:00:54.517861] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:30.802 [2024-05-15 02:00:54.517921] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:30.802 [2024-05-15 02:00:54.517947] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:30.802 [2024-05-15 02:00:54.517961] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:30.802 [2024-05-15 02:00:54.517974] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
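
The nvmftestinit sequence above turns the two E810 ports found during the PCI scan into a two-host test bed on one machine: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule admits NVMe/TCP traffic on port 4420, and a ping in each direction proves the path. Condensed from the trace, the setup is:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port leaves the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns

Every later target-side command is then wrapped in 'ip netns exec cvl_0_0_ns_spdk' via NVMF_TARGET_NS_CMD, which is why nvmf_tgt is launched through that prefix below.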
00:32:30.802 [2024-05-15 02:00:54.518003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:30.802 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:32:30.802 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # return 0 00:32:30.802 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:30.802 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@727 -- # xtrace_disable 00:32:30.802 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:30.802 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:30.802 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:32:30.802 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:32:30.802 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:32:30.802 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:30.802 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:30.802 null0 00:32:30.802 [2024-05-15 02:00:54.678817] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:30.802 [2024-05-15 02:00:54.702808] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:32:30.802 [2024-05-15 02:00:54.703094] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:30.802 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:30.802 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:32:30.802 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:30.802 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:30.802 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:30.802 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:32:30.802 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:32:30.802 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:30.802 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=11514 00:32:30.802 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:30.802 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 11514 /var/tmp/bperf.sock 00:32:30.802 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # '[' -z 11514 ']' 00:32:30.802 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:30.802 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local 
max_retries=100 00:32:30.802 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:30.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:30.802 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:32:30.802 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:31.059 [2024-05-15 02:00:54.751112] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:32:31.059 [2024-05-15 02:00:54.751180] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid11514 ] 00:32:31.059 EAL: No free 2048 kB hugepages reported on node 1 00:32:31.059 [2024-05-15 02:00:54.822375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:31.059 [2024-05-15 02:00:54.914603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:31.059 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:32:31.059 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # return 0 00:32:31.059 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:31.059 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:31.059 02:00:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:31.624 02:00:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:31.624 02:00:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:31.881 nvme0n1 00:32:31.881 02:00:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:31.881 02:00:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:31.881 Running I/O for 2 seconds... 
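
Each run_bperf case follows the same choreography, all of it visible in the trace: bdevperf is started paused on core mask 0x2 with its own RPC socket, initialization is released, a controller with data digest enabled is attached, and the I/O phase is driven over RPC. A condensed sketch of the first case (randread, 4k, qd 128; workspace paths shortened):

  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
      --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0          # --ddgst enables NVMe/TCP data digest
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The -z flag is what makes bdevperf sit idle until perform_tests arrives, and --wait-for-rpc is what leaves room for the framework_start_init call in between.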
00:32:34.410 00:32:34.410 Latency(us) 00:32:34.410 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:34.410 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:34.410 nvme0n1 : 2.05 18653.88 72.87 0.00 0.00 6715.44 3568.07 49321.91 00:32:34.410 =================================================================================================================== 00:32:34.410 Total : 18653.88 72.87 0.00 0.00 6715.44 3568.07 49321.91 00:32:34.410 0 00:32:34.410 02:00:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:34.410 02:00:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:34.410 02:00:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:34.410 02:00:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:34.410 02:00:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:34.410 | select(.opcode=="crc32c") 00:32:34.410 | "\(.module_name) \(.executed)"' 00:32:34.410 02:00:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:34.410 02:00:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:34.410 02:00:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:34.410 02:00:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:34.410 02:00:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 11514 00:32:34.410 02:00:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' -z 11514 ']' 00:32:34.410 02:00:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # kill -0 11514 00:32:34.410 02:00:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # uname 00:32:34.410 02:00:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:32:34.410 02:00:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 11514 00:32:34.410 02:00:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:32:34.410 02:00:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:32:34.410 02:00:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 11514' 00:32:34.410 killing process with pid 11514 00:32:34.410 02:00:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # kill 11514 00:32:34.410 Received shutdown signal, test time was about 2.000000 seconds 00:32:34.410 00:32:34.410 Latency(us) 00:32:34.410 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:34.410 =================================================================================================================== 00:32:34.410 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:34.410 02:00:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # wait 11514 00:32:34.668 02:00:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:32:34.668 02:00:58 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:34.668 02:00:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:34.668 02:00:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:34.668 02:00:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:32:34.668 02:00:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:32:34.668 02:00:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:34.668 02:00:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=11928 00:32:34.668 02:00:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 11928 /var/tmp/bperf.sock 00:32:34.668 02:00:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:34.668 02:00:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # '[' -z 11928 ']' 00:32:34.668 02:00:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:34.668 02:00:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:32:34.668 02:00:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:34.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:34.668 02:00:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:32:34.668 02:00:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:34.668 [2024-05-15 02:00:58.452623] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:32:34.668 [2024-05-15 02:00:58.452700] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid11928 ] 00:32:34.668 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:34.668 Zero copy mechanism will not be used. 
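
host/digest.sh@128 through @131, above and below, run the same clean-path test over a small matrix: both I/O directions at a latency-oriented and a throughput-oriented operating point. The four cases are equivalent to:

  for args in 'randread 4096 128' 'randread 131072 16' \
              'randwrite 4096 128' 'randwrite 131072 16'; do
      run_bperf $args false     # word-split into rw/bs/qd; false = no DSA, software crc32c
  done

The 131072-byte cases also trip bdevperf's zero-copy notice seen here, since the I/O size exceeds the 65536-byte threshold it reports.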
00:32:34.668 EAL: No free 2048 kB hugepages reported on node 1 00:32:34.668 [2024-05-15 02:00:58.524920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:34.926 [2024-05-15 02:00:58.617767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:34.926 02:00:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:32:34.926 02:00:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # return 0 00:32:34.926 02:00:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:34.926 02:00:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:34.926 02:00:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:35.183 02:00:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:35.183 02:00:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:35.746 nvme0n1 00:32:35.746 02:00:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:35.746 02:00:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:35.746 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:35.747 Zero copy mechanism will not be used. 00:32:35.747 Running I/O for 2 seconds... 
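
After each run the harness checks that crc32c was actually computed where it expected: it pulls accel statistics over the bperf socket, filters them with jq, and compares the module name against the expected one (software here, since DSA scanning is off) while requiring a non-zero op count. Roughly, the parsing step used after each run is:

  scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' \
      | { read -r acc_module acc_executed
          (( acc_executed > 0 )) && [[ $acc_module == software ]]; }

The jq filter emits a single 'module executed-count' pair, which the read -r at host/digest.sh@93 splits into acc_module and acc_executed for the @95/@96 assertions.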
00:32:37.641 00:32:37.641 Latency(us) 00:32:37.641 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:37.641 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:32:37.641 nvme0n1 : 2.00 5277.76 659.72 0.00 0.00 3026.74 801.00 9806.13 00:32:37.641 =================================================================================================================== 00:32:37.641 Total : 5277.76 659.72 0.00 0.00 3026.74 801.00 9806.13 00:32:37.641 0 00:32:37.641 02:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:37.641 02:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:37.641 02:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:37.642 02:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:37.642 02:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:37.642 | select(.opcode=="crc32c") 00:32:37.642 | "\(.module_name) \(.executed)"' 00:32:37.898 02:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:37.899 02:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:37.899 02:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:37.899 02:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:37.899 02:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 11928 00:32:37.899 02:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' -z 11928 ']' 00:32:37.899 02:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # kill -0 11928 00:32:37.899 02:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # uname 00:32:37.899 02:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:32:37.899 02:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 11928 00:32:37.899 02:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:32:37.899 02:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:32:37.899 02:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 11928' 00:32:37.899 killing process with pid 11928 00:32:37.899 02:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # kill 11928 00:32:37.899 Received shutdown signal, test time was about 2.000000 seconds 00:32:37.899 00:32:37.899 Latency(us) 00:32:37.899 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:37.899 =================================================================================================================== 00:32:37.899 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:37.899 02:01:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # wait 11928 00:32:38.155 02:01:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:32:38.155 02:01:02 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:38.155 02:01:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:38.155 02:01:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:32:38.155 02:01:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:32:38.155 02:01:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:32:38.155 02:01:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:38.155 02:01:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=12329 00:32:38.155 02:01:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:38.155 02:01:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 12329 /var/tmp/bperf.sock 00:32:38.155 02:01:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # '[' -z 12329 ']' 00:32:38.155 02:01:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:38.155 02:01:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:32:38.155 02:01:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:38.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:38.155 02:01:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:32:38.155 02:01:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:38.155 [2024-05-15 02:01:02.066568] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
00:32:38.155 [2024-05-15 02:01:02.066655] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid12329 ] 00:32:38.415 EAL: No free 2048 kB hugepages reported on node 1 00:32:38.415 [2024-05-15 02:01:02.139057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:38.415 [2024-05-15 02:01:02.226924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:38.415 02:01:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:32:38.415 02:01:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # return 0 00:32:38.415 02:01:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:38.415 02:01:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:38.415 02:01:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:38.981 02:01:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:38.981 02:01:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:39.239 nvme0n1 00:32:39.239 02:01:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:39.239 02:01:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:39.239 Running I/O for 2 seconds... 
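
The repeated teardown fragments above come from autotest_common.sh's killprocess helper. Reconstructed from the trace (an approximation of its shape, not the verbatim function): it refuses to act without a pid, confirms the process is alive, checks that it is not a bare sudo wrapper, announces the kill, and reaps the process so the next case starts from a clean slate:

  killprocess() {
      [ -z "$1" ] && return 1                  # the '[' -z <pid> ']' guard at @828/@947
      kill -0 "$1" || return 1                 # @951: still running?
      local process_name=
      [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$1")
      if [ "$process_name" != sudo ]; then     # the sudo branch is not exercised here
          echo "killing process with pid $1"
          kill "$1"
      fi
      wait "$1"                                # reap it so sockets and ports are freed
  }

The 'Received shutdown signal' latency block printed after each kill is bdevperf's own exit summary, not a second test run, which is why its counters are all zero.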
00:32:41.763 00:32:41.763 Latency(us) 00:32:41.763 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:41.763 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:41.763 nvme0n1 : 2.00 19289.38 75.35 0.00 0.00 6627.29 4296.25 12913.02 00:32:41.763 =================================================================================================================== 00:32:41.763 Total : 19289.38 75.35 0.00 0.00 6627.29 4296.25 12913.02 00:32:41.763 0 00:32:41.763 02:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:41.763 02:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:41.763 02:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:41.763 02:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:41.763 | select(.opcode=="crc32c") 00:32:41.763 | "\(.module_name) \(.executed)"' 00:32:41.763 02:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:41.763 02:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:41.763 02:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:41.763 02:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:41.763 02:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:41.763 02:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 12329 00:32:41.763 02:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' -z 12329 ']' 00:32:41.763 02:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # kill -0 12329 00:32:41.763 02:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # uname 00:32:41.763 02:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:32:41.763 02:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 12329 00:32:41.763 02:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:32:41.763 02:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:32:41.763 02:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 12329' 00:32:41.763 killing process with pid 12329 00:32:41.763 02:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # kill 12329 00:32:41.763 Received shutdown signal, test time was about 2.000000 seconds 00:32:41.763 00:32:41.763 Latency(us) 00:32:41.763 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:41.763 =================================================================================================================== 00:32:41.763 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:41.763 02:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # wait 12329 00:32:41.763 02:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:32:41.763 02:01:05 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:41.763 02:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:41.763 02:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:32:41.763 02:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:32:41.763 02:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:32:41.763 02:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:41.763 02:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=12738 00:32:41.763 02:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 12738 /var/tmp/bperf.sock 00:32:41.763 02:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:41.763 02:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # '[' -z 12738 ']' 00:32:41.763 02:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:41.763 02:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:32:41.763 02:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:41.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:41.763 02:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:32:41.763 02:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:42.020 [2024-05-15 02:01:05.699575] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:32:42.020 [2024-05-15 02:01:05.699657] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid12738 ] 00:32:42.020 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:42.020 Zero copy mechanism will not be used. 
00:32:42.020 EAL: No free 2048 kB hugepages reported on node 1 00:32:42.020 [2024-05-15 02:01:05.766775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:42.020 [2024-05-15 02:01:05.853641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:42.020 02:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:32:42.020 02:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # return 0 00:32:42.020 02:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:42.020 02:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:42.020 02:01:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:42.586 02:01:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:42.586 02:01:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:42.847 nvme0n1 00:32:42.847 02:01:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:42.847 02:01:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:42.847 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:42.847 Zero copy mechanism will not be used. 00:32:42.847 Running I/O for 2 seconds... 
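
waitforlisten is the counterpart on the startup side: every 'Waiting for process to start up and listen on UNIX domain socket ...' banner above is its output. Only fragments of it appear in the trace (the locals, the banner, and the final (( i == 0 )) check before 'return 0'), so the following is a sketch of its shape rather than the exact code, with the liveness probe being an assumption (any cheap RPC such as rpc_get_methods would serve):

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
      local max_retries=100 i
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = max_retries; i != 0; i--)); do
          kill -0 "$pid" || return 1                           # died while we waited
          scripts/rpc.py -s "$rpc_addr" rpc_get_methods \
              &> /dev/null && break                            # assumed probe
          sleep 0.5
      done
      (( i == 0 )) && return 1                                 # retries exhausted
      return 0
  }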
00:32:45.403 00:32:45.403 Latency(us) 00:32:45.403 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:45.403 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:32:45.403 nvme0n1 : 2.00 5238.06 654.76 0.00 0.00 3045.77 1953.94 8592.50 00:32:45.403 =================================================================================================================== 00:32:45.403 Total : 5238.06 654.76 0.00 0.00 3045.77 1953.94 8592.50 00:32:45.403 0 00:32:45.403 02:01:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:45.403 02:01:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:45.403 02:01:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:45.403 02:01:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:45.403 02:01:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:45.403 | select(.opcode=="crc32c") 00:32:45.403 | "\(.module_name) \(.executed)"' 00:32:45.403 02:01:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:45.403 02:01:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:45.403 02:01:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:45.403 02:01:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:45.403 02:01:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 12738 00:32:45.403 02:01:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' -z 12738 ']' 00:32:45.403 02:01:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # kill -0 12738 00:32:45.403 02:01:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # uname 00:32:45.403 02:01:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:32:45.403 02:01:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 12738 00:32:45.403 02:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:32:45.403 02:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:32:45.403 02:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 12738' 00:32:45.403 killing process with pid 12738 00:32:45.403 02:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # kill 12738 00:32:45.403 Received shutdown signal, test time was about 2.000000 seconds 00:32:45.403 00:32:45.403 Latency(us) 00:32:45.403 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:45.403 =================================================================================================================== 00:32:45.403 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:45.403 02:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # wait 12738 00:32:45.403 02:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 11380 00:32:45.403 02:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@947 -- # '[' -z 11380 ']' 00:32:45.403 02:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # kill -0 11380 00:32:45.403 02:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # uname 00:32:45.403 02:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:32:45.403 02:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 11380 00:32:45.403 02:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:32:45.403 02:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:32:45.403 02:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 11380' 00:32:45.403 killing process with pid 11380 00:32:45.403 02:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # kill 11380 00:32:45.403 [2024-05-15 02:01:09.270003] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:32:45.403 02:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # wait 11380 00:32:45.661 00:32:45.661 real 0m15.189s 00:32:45.661 user 0m29.882s 00:32:45.661 sys 0m4.290s 00:32:45.661 02:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # xtrace_disable 00:32:45.661 02:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:45.661 ************************************ 00:32:45.661 END TEST nvmf_digest_clean 00:32:45.661 ************************************ 00:32:45.661 02:01:09 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:32:45.661 02:01:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:32:45.661 02:01:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1104 -- # xtrace_disable 00:32:45.661 02:01:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:45.661 ************************************ 00:32:45.661 START TEST nvmf_digest_error 00:32:45.661 ************************************ 00:32:45.661 02:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # run_digest_error 00:32:45.661 02:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:32:45.661 02:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:45.661 02:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@721 -- # xtrace_disable 00:32:45.661 02:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:45.661 02:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=13294 00:32:45.661 02:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:45.661 02:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 13294 00:32:45.661 02:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 13294 ']' 00:32:45.661 02:01:09 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:45.661 02:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100 00:32:45.661 02:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:45.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:45.661 02:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable 00:32:45.661 02:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:45.919 [2024-05-15 02:01:09.596522] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:32:45.919 [2024-05-15 02:01:09.596612] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:45.919 EAL: No free 2048 kB hugepages reported on node 1 00:32:45.919 [2024-05-15 02:01:09.682190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:45.919 [2024-05-15 02:01:09.773246] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:45.919 [2024-05-15 02:01:09.773308] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:45.919 [2024-05-15 02:01:09.773324] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:45.919 [2024-05-15 02:01:09.773337] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:45.919 [2024-05-15 02:01:09.773349] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
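
nvmf_digest_error re-runs the digest matrix with the accel layer rigged to fail: before the target finishes initializing, the crc32c opcode is reassigned to the error-injection module, and each case then disarms or arms corruption over RPC, as the records below show. Condensed, the target-side sequence amounts to:

  # while nvmf_tgt is still paused by --wait-for-rpc:
  scripts/rpc.py accel_assign_opc -o crc32c -m error
  # per test phase:
  scripts/rpc.py accel_error_inject_error -o crc32c -t disable          # pass-through baseline
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256   # corrupt 256 operations

The corrupted digests surface on the host side as the 'data digest error on tqpair' records and the COMMAND TRANSIENT TRANSPORT ERROR completions that fill the remainder of this run.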
00:32:45.919 [2024-05-15 02:01:09.773379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:45.919 02:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:32:45.919 02:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0 00:32:45.919 02:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:45.919 02:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@727 -- # xtrace_disable 00:32:45.919 02:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:46.177 02:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:46.177 02:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:32:46.177 02:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:46.177 02:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:46.177 [2024-05-15 02:01:09.870042] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:32:46.177 02:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:46.177 02:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:32:46.177 02:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:32:46.177 02:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:46.177 02:01:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:46.177 null0 00:32:46.177 [2024-05-15 02:01:09.978953] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:46.177 [2024-05-15 02:01:10.002947] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:32:46.177 [2024-05-15 02:01:10.003255] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:46.177 02:01:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:46.177 02:01:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:32:46.177 02:01:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:32:46.177 02:01:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:32:46.177 02:01:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:32:46.177 02:01:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:32:46.177 02:01:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=13319 00:32:46.177 02:01:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 13319 /var/tmp/bperf.sock 00:32:46.177 02:01:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:32:46.177 02:01:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 13319 ']' 00:32:46.177 
02:01:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:46.177 02:01:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100 00:32:46.177 02:01:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:46.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:46.177 02:01:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable 00:32:46.177 02:01:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:46.177 [2024-05-15 02:01:10.050377] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:32:46.177 [2024-05-15 02:01:10.050467] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid13319 ] 00:32:46.177 EAL: No free 2048 kB hugepages reported on node 1 00:32:46.435 [2024-05-15 02:01:10.121314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:46.435 [2024-05-15 02:01:10.209382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:46.435 02:01:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:32:46.435 02:01:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0 00:32:46.435 02:01:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:46.435 02:01:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:46.692 02:01:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:46.692 02:01:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:46.692 02:01:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:46.692 02:01:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:46.692 02:01:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:46.692 02:01:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:47.256 nvme0n1 00:32:47.256 02:01:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:32:47.256 02:01:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:47.256 02:01:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:47.256 02:01:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:47.256 02:01:11 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:47.256 02:01:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:47.256 Running I/O for 2 seconds... 00:32:47.256 [2024-05-15 02:01:11.187824] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:47.256 [2024-05-15 02:01:11.188021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.256 [2024-05-15 02:01:11.188049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.515 [2024-05-15 02:01:11.201045] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:47.515 [2024-05-15 02:01:11.201083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.515 [2024-05-15 02:01:11.201103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.515 [2024-05-15 02:01:11.218366] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:47.515 [2024-05-15 02:01:11.218398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.515 [2024-05-15 02:01:11.218415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.515 [2024-05-15 02:01:11.232308] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:47.515 [2024-05-15 02:01:11.232341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:14194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.515 [2024-05-15 02:01:11.232358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.515 [2024-05-15 02:01:11.245996] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:47.515 [2024-05-15 02:01:11.246046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:10936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.515 [2024-05-15 02:01:11.246066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.515 [2024-05-15 02:01:11.260777] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:47.515 [2024-05-15 02:01:11.260809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.515 [2024-05-15 02:01:11.260827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.515 [2024-05-15 02:01:11.277553] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x2200420) 00:32:47.515 [2024-05-15 02:01:11.277584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.515 [2024-05-15 02:01:11.277601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.515 [2024-05-15 02:01:11.290975] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:47.515 [2024-05-15 02:01:11.291011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.515 [2024-05-15 02:01:11.291030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.515 [2024-05-15 02:01:11.309510] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:47.516 [2024-05-15 02:01:11.309562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:25151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.516 [2024-05-15 02:01:11.309582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.516 [2024-05-15 02:01:11.326682] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:47.516 [2024-05-15 02:01:11.326713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.516 [2024-05-15 02:01:11.326737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.516 [2024-05-15 02:01:11.340504] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:47.516 [2024-05-15 02:01:11.340553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.516 [2024-05-15 02:01:11.340572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.516 [2024-05-15 02:01:11.356293] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:47.516 [2024-05-15 02:01:11.356324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.516 [2024-05-15 02:01:11.356340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.516 [2024-05-15 02:01:11.375088] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:47.516 [2024-05-15 02:01:11.375124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:7055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.516 [2024-05-15 02:01:11.375143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.516 [2024-05-15 02:01:11.392387] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:47.516 [2024-05-15 02:01:11.392420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.516 [2024-05-15 02:01:11.392437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.516 [2024-05-15 02:01:11.404866] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:47.516 [2024-05-15 02:01:11.404903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.516 [2024-05-15 02:01:11.404923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.516 [2024-05-15 02:01:11.421107] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:47.516 [2024-05-15 02:01:11.421144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.516 [2024-05-15 02:01:11.421163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.516 [2024-05-15 02:01:11.436835] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:47.516 [2024-05-15 02:01:11.436868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.516 [2024-05-15 02:01:11.436884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.774 [2024-05-15 02:01:11.449886] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:47.774 [2024-05-15 02:01:11.449918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.774 [2024-05-15 02:01:11.449935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.774 [2024-05-15 02:01:11.467852] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:47.774 [2024-05-15 02:01:11.467898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.774 [2024-05-15 02:01:11.467915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.774 [2024-05-15 02:01:11.482356] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:47.774 [2024-05-15 02:01:11.482388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.774 [2024-05-15 02:01:11.482404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:32:47.774 [2024-05-15 02:01:11.498408] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:47.774 [2024-05-15 02:01:11.498441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.774 [2024-05-15 02:01:11.498458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.774 [2024-05-15 02:01:11.514184] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:47.774 [2024-05-15 02:01:11.514326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:10158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.774 [2024-05-15 02:01:11.514353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.774 [2024-05-15 02:01:11.527132] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:47.774 [2024-05-15 02:01:11.527169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.774 [2024-05-15 02:01:11.527189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.774 [2024-05-15 02:01:11.546494] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:47.774 [2024-05-15 02:01:11.546525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.774 [2024-05-15 02:01:11.546557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.774 [2024-05-15 02:01:11.563224] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:47.774 [2024-05-15 02:01:11.563283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.774 [2024-05-15 02:01:11.563306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.774 [2024-05-15 02:01:11.576883] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:47.774 [2024-05-15 02:01:11.576914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.774 [2024-05-15 02:01:11.576930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.774 [2024-05-15 02:01:11.590776] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:47.774 [2024-05-15 02:01:11.590812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.774 [2024-05-15 02:01:11.590831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.774 [2024-05-15 02:01:11.608329] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:47.774 [2024-05-15 02:01:11.608361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.774 [2024-05-15 02:01:11.608378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.774 [2024-05-15 02:01:11.626184] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:47.774 [2024-05-15 02:01:11.626228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.774 [2024-05-15 02:01:11.626251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.774 [2024-05-15 02:01:11.640013] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:47.774 [2024-05-15 02:01:11.640042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:6097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.774 [2024-05-15 02:01:11.640058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.774 [2024-05-15 02:01:11.653412] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:47.774 [2024-05-15 02:01:11.653442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:25066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.774 [2024-05-15 02:01:11.653458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.774 [2024-05-15 02:01:11.670474] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:47.774 [2024-05-15 02:01:11.670519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.774 [2024-05-15 02:01:11.670536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.774 [2024-05-15 02:01:11.687282] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:47.774 [2024-05-15 02:01:11.687311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.774 [2024-05-15 02:01:11.687327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.774 [2024-05-15 02:01:11.700112] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:47.774 [2024-05-15 02:01:11.700147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.774 [2024-05-15 02:01:11.700167] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.032 [2024-05-15 02:01:11.715145] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.032 [2024-05-15 02:01:11.715181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.032 [2024-05-15 02:01:11.715201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.032 [2024-05-15 02:01:11.733912] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.032 [2024-05-15 02:01:11.733948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.032 [2024-05-15 02:01:11.733974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.032 [2024-05-15 02:01:11.746798] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.032 [2024-05-15 02:01:11.746834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.032 [2024-05-15 02:01:11.746854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.032 [2024-05-15 02:01:11.766248] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.032 [2024-05-15 02:01:11.766283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.032 [2024-05-15 02:01:11.766303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.032 [2024-05-15 02:01:11.780074] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.032 [2024-05-15 02:01:11.780104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.032 [2024-05-15 02:01:11.780120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.032 [2024-05-15 02:01:11.798103] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.032 [2024-05-15 02:01:11.798140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.032 [2024-05-15 02:01:11.798159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.032 [2024-05-15 02:01:11.815131] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.032 [2024-05-15 02:01:11.815167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:48.032 [2024-05-15 02:01:11.815186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.032 [2024-05-15 02:01:11.832651] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.032 [2024-05-15 02:01:11.832682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.032 [2024-05-15 02:01:11.832700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.032 [2024-05-15 02:01:11.845589] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.032 [2024-05-15 02:01:11.845624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.032 [2024-05-15 02:01:11.845643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.032 [2024-05-15 02:01:11.864501] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.032 [2024-05-15 02:01:11.864537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.032 [2024-05-15 02:01:11.864557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.032 [2024-05-15 02:01:11.883468] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.032 [2024-05-15 02:01:11.883505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:11532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.032 [2024-05-15 02:01:11.883537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.032 [2024-05-15 02:01:11.900929] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.032 [2024-05-15 02:01:11.900961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.032 [2024-05-15 02:01:11.900978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.032 [2024-05-15 02:01:11.913717] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.032 [2024-05-15 02:01:11.913753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.032 [2024-05-15 02:01:11.913773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.032 [2024-05-15 02:01:11.930560] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.032 [2024-05-15 02:01:11.930591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 
lba:15481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.032 [2024-05-15 02:01:11.930608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.032 [2024-05-15 02:01:11.943472] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.032 [2024-05-15 02:01:11.943502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.032 [2024-05-15 02:01:11.943534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.032 [2024-05-15 02:01:11.959780] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.032 [2024-05-15 02:01:11.959816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.032 [2024-05-15 02:01:11.959835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.290 [2024-05-15 02:01:11.976674] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.290 [2024-05-15 02:01:11.976706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:16212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.290 [2024-05-15 02:01:11.976746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.290 [2024-05-15 02:01:11.989236] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.290 [2024-05-15 02:01:11.989285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.290 [2024-05-15 02:01:11.989302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.290 [2024-05-15 02:01:12.003966] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.290 [2024-05-15 02:01:12.004002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.290 [2024-05-15 02:01:12.004022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.290 [2024-05-15 02:01:12.016840] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.291 [2024-05-15 02:01:12.016876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.291 [2024-05-15 02:01:12.016895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.291 [2024-05-15 02:01:12.033663] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.291 [2024-05-15 02:01:12.033694] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.291 [2024-05-15 02:01:12.033710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.291 [2024-05-15 02:01:12.049894] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.291 [2024-05-15 02:01:12.049930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.291 [2024-05-15 02:01:12.049949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.291 [2024-05-15 02:01:12.061966] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.291 [2024-05-15 02:01:12.062013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.291 [2024-05-15 02:01:12.062031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.291 [2024-05-15 02:01:12.075994] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.291 [2024-05-15 02:01:12.076030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.291 [2024-05-15 02:01:12.076050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.291 [2024-05-15 02:01:12.090201] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.291 [2024-05-15 02:01:12.090240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.291 [2024-05-15 02:01:12.090258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.291 [2024-05-15 02:01:12.103665] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.291 [2024-05-15 02:01:12.103701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:10153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.291 [2024-05-15 02:01:12.103720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.291 [2024-05-15 02:01:12.117871] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.291 [2024-05-15 02:01:12.117908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.291 [2024-05-15 02:01:12.117927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.291 [2024-05-15 02:01:12.132024] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 
00:32:48.291 [2024-05-15 02:01:12.132066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:9206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.291 [2024-05-15 02:01:12.132086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.291 [2024-05-15 02:01:12.146095] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.291 [2024-05-15 02:01:12.146130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.291 [2024-05-15 02:01:12.146150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.291 [2024-05-15 02:01:12.160135] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.291 [2024-05-15 02:01:12.160170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:17658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.291 [2024-05-15 02:01:12.160190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.291 [2024-05-15 02:01:12.178854] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.291 [2024-05-15 02:01:12.178890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.291 [2024-05-15 02:01:12.178916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.291 [2024-05-15 02:01:12.195401] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.291 [2024-05-15 02:01:12.195432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.291 [2024-05-15 02:01:12.195450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.291 [2024-05-15 02:01:12.207928] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.291 [2024-05-15 02:01:12.207965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:11625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.291 [2024-05-15 02:01:12.207985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.549 [2024-05-15 02:01:12.224326] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.549 [2024-05-15 02:01:12.224372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.549 [2024-05-15 02:01:12.224390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.549 [2024-05-15 02:01:12.242193] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.549 [2024-05-15 02:01:12.242242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.549 [2024-05-15 02:01:12.242259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.549 [2024-05-15 02:01:12.260083] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.549 [2024-05-15 02:01:12.260119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.549 [2024-05-15 02:01:12.260138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.549 [2024-05-15 02:01:12.273054] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.549 [2024-05-15 02:01:12.273090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.549 [2024-05-15 02:01:12.273117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.549 [2024-05-15 02:01:12.290781] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.549 [2024-05-15 02:01:12.290818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.549 [2024-05-15 02:01:12.290838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.549 [2024-05-15 02:01:12.307141] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.549 [2024-05-15 02:01:12.307172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.549 [2024-05-15 02:01:12.307190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.549 [2024-05-15 02:01:12.325432] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.549 [2024-05-15 02:01:12.325463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:17757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.549 [2024-05-15 02:01:12.325486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.549 [2024-05-15 02:01:12.339101] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.549 [2024-05-15 02:01:12.339130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.549 [2024-05-15 02:01:12.339149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:32:48.549 [2024-05-15 02:01:12.356749] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.549 [2024-05-15 02:01:12.356785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.549 [2024-05-15 02:01:12.356805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.549 [2024-05-15 02:01:12.373956] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.549 [2024-05-15 02:01:12.374001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.549 [2024-05-15 02:01:12.374019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.549 [2024-05-15 02:01:12.391988] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.549 [2024-05-15 02:01:12.392019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.549 [2024-05-15 02:01:12.392037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.549 [2024-05-15 02:01:12.404757] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.549 [2024-05-15 02:01:12.404794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.549 [2024-05-15 02:01:12.404824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.549 [2024-05-15 02:01:12.421390] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.549 [2024-05-15 02:01:12.421419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.549 [2024-05-15 02:01:12.421437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.549 [2024-05-15 02:01:12.437823] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.549 [2024-05-15 02:01:12.437859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.549 [2024-05-15 02:01:12.437880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.549 [2024-05-15 02:01:12.455880] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.549 [2024-05-15 02:01:12.455912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.549 [2024-05-15 02:01:12.455932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.549 [2024-05-15 02:01:12.472284] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.549 [2024-05-15 02:01:12.472314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:25475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.549 [2024-05-15 02:01:12.472332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.807 [2024-05-15 02:01:12.486123] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.807 [2024-05-15 02:01:12.486160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:25451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.807 [2024-05-15 02:01:12.486179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.807 [2024-05-15 02:01:12.501854] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.807 [2024-05-15 02:01:12.501890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.807 [2024-05-15 02:01:12.501909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.807 [2024-05-15 02:01:12.517356] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.807 [2024-05-15 02:01:12.517387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.807 [2024-05-15 02:01:12.517466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.807 [2024-05-15 02:01:12.530607] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.807 [2024-05-15 02:01:12.530641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.807 [2024-05-15 02:01:12.530661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.807 [2024-05-15 02:01:12.546747] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.807 [2024-05-15 02:01:12.546789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:15254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.807 [2024-05-15 02:01:12.546809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.807 [2024-05-15 02:01:12.559709] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.807 [2024-05-15 02:01:12.559739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.807 [2024-05-15 02:01:12.559756] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.807 [2024-05-15 02:01:12.574008] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.807 [2024-05-15 02:01:12.574037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:17610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.807 [2024-05-15 02:01:12.574056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.807 [2024-05-15 02:01:12.586972] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.807 [2024-05-15 02:01:12.587008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.807 [2024-05-15 02:01:12.587028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.807 [2024-05-15 02:01:12.601097] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.807 [2024-05-15 02:01:12.601133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.807 [2024-05-15 02:01:12.601153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.807 [2024-05-15 02:01:12.619199] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.807 [2024-05-15 02:01:12.619271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.807 [2024-05-15 02:01:12.619290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.807 [2024-05-15 02:01:12.636449] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.807 [2024-05-15 02:01:12.636482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:20661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.807 [2024-05-15 02:01:12.636500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.807 [2024-05-15 02:01:12.648670] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.807 [2024-05-15 02:01:12.648707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.807 [2024-05-15 02:01:12.648727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.807 [2024-05-15 02:01:12.666986] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.807 [2024-05-15 02:01:12.667020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:48.807 [2024-05-15 02:01:12.667053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.807 [2024-05-15 02:01:12.678648] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.807 [2024-05-15 02:01:12.678677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:8553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.807 [2024-05-15 02:01:12.678693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.807 [2024-05-15 02:01:12.695031] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.807 [2024-05-15 02:01:12.695086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.807 [2024-05-15 02:01:12.695104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.807 [2024-05-15 02:01:12.708615] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.807 [2024-05-15 02:01:12.708652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.807 [2024-05-15 02:01:12.708671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.807 [2024-05-15 02:01:12.722533] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.807 [2024-05-15 02:01:12.722569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:14823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.807 [2024-05-15 02:01:12.722614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:48.807 [2024-05-15 02:01:12.736401] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:48.807 [2024-05-15 02:01:12.736432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:48.807 [2024-05-15 02:01:12.736450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:49.065 [2024-05-15 02:01:12.753772] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:49.065 [2024-05-15 02:01:12.753808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.065 [2024-05-15 02:01:12.753827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:49.065 [2024-05-15 02:01:12.771442] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420) 00:32:49.065 [2024-05-15 02:01:12.771473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 
lba:21411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:49.065 [2024-05-15 02:01:12.771490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... ~25 similar completions follow: between 02:01:12.784 and 02:01:13.157, every READ (len:1) on qid:1 hits "data digest error on tqpair=(0x2200420)" in nvme_tcp.c:1450 and completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), cid and lba varying per command ...]
00:32:49.323 [2024-05-15 02:01:13.168562] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2200420)
00:32:49.323 [2024-05-15 02:01:13.168615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:18464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:49.323 [2024-05-15 02:01:13.168635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:49.323
00:32:49.323 Latency(us)
00:32:49.323 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:32:49.323 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:32:49.323 nvme0n1                     :       2.04   15982.93      62.43       0.00     0.00    7838.66    4004.98   45826.65
00:32:49.323 ===================================================================================================================
00:32:49.323 Total                       :                15982.93      62.43       0.00     0.00    7838.66    4004.98   45826.65
00:32:49.323 0
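A quick arithmetic cross-check of the summary table above, assuming only the 4096-byte I/O size printed in the job line (an illustrative sketch, not part of the harness):

# MiB/s = IOPS x I/O size in bytes / 2^20
echo '15982.93 * 4096 / 1048576' | bc -l    # ~62.43, matching the MiB/s column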
00:32:49.323 02:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:49.323 02:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:49.324 02:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:49.324 02:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:49.324 | .driver_specific
00:32:49.324 | .nvme_error
00:32:49.324 | .status_code
00:32:49.324 | .command_transient_transport_error'
00:32:49.581 02:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 128 > 0 ))
00:32:49.581 02:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 13319
00:32:49.581 02:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 13319 ']'
00:32:49.581 02:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 13319
00:32:49.581 02:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname
00:32:49.581 02:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:32:49.581 02:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 13319
00:32:49.581 02:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_1
00:32:49.581 02:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']'
00:32:49.581 02:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 13319'
killing process with pid 13319
00:32:49.581 02:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 13319
Received shutdown signal, test time was about 2.000000 seconds
00:32:49.581
00:32:49.581 Latency(us)
00:32:49.581 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:32:49.581 ===================================================================================================================
00:32:49.581 Total                       :       0.00       0.00      0.00       0.00      0.00      0.00       0.00
00:32:49.581 02:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 13319
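The 128 consumed by the (( 128 > 0 )) check above is the transient-error count read back from bdev_get_iostat; the .nvme_error block exists because the bdevperf instance was configured with bdev_nvme_set_options --nvme-error-stat. A minimal reconstruction of the helper, inferred from the rpc.py and jq trace lines above (host/digest.sh remains the authoritative definition):

# Sketch of get_transient_errcount as traced; bperf_rpc wraps rpc.py
# pointed at bdevperf's RPC socket (/var/tmp/bperf.sock).
get_transient_errcount() {
    local bdev=$1
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
}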
00:32:49.838 02:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:32:49.838 02:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:32:49.838 02:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:32:49.838 02:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:32:49.838 02:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:32:49.838 02:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=13741
00:32:49.838 02:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:32:49.838 02:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 13741 /var/tmp/bperf.sock
00:32:49.838 02:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 13741 ']'
00:32:49.838 02:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock
00:32:49.838 02:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100
00:32:49.838 02:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:32:49.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
02:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable
02:01:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:50.095 [2024-05-15 02:01:13.777605] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization...
[2024-05-15 02:01:13.777700] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid13741 ]
00:32:50.095 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:50.095 Zero copy mechanism will not be used.
00:32:50.095 EAL: No free 2048 kB hugepages reported on node 1
00:32:50.095 [2024-05-15 02:01:13.850708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-05-15 02:01:13.936220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:32:50.352 02:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 ))
00:32:50.352 02:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0
00:32:50.352 02:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:50.352 02:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:50.352 02:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:32:50.352 02:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:50.352 02:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:50.352 02:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:50.352 02:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:50.352 02:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:50.917 nvme0n1
00:32:50.918 02:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:32:50.918 02:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:50.918 02:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:50.918 02:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:50.918 02:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:32:50.918 02:01:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
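Collected in one place, the bring-up just traced reduces to a handful of RPCs against the fresh bdevperf instance. The sketch below condenses the trace and is not a substitute for host/digest.sh; socket routing of the rpc_cmd and bperf_rpc helpers follows the harness, and the -o/-t/-i injection arguments are copied verbatim from the trace above.

RPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'

# Keep per-status-code NVMe error counters and retry failed I/O indefinitely in
# the driver, so digest failures accumulate as statistics instead of aborting the run.
$RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the TCP target with data digest enabled (--ddgst): every data PDU now
# carries a CRC32C that the initiator verifies on receive.
$RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Arm CRC32C corruption in the accel layer (arguments as traced), then kick off
# the timed workload through bdevperf's RPC interface.
$RPC accel_error_inject_error -o crc32c -t corrupt -i 32
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests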
than zero copy threshold (65536).
00:32:50.918 Zero copy mechanism will not be used.
00:32:50.918 Running I/O for 2 seconds...
[2024-05-15 02:01:14.806744] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0)
[2024-05-15 02:01:14.806802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-05-15 02:01:14.806826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
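Two details of these completions are worth decoding. The (00/22) pair is the NVMe status code type and status code the initiator uses to report a data digest mismatch, Command Transient Transport Error, which is exactly the counter the jq filter above reads back. The len:32 follows from the new 131072-byte I/O size, assuming the namespace's 4096-byte blocks (consistent with the len:1 completions of the preceding 4096-byte run):

# 131072-byte I/Os span 32 logical blocks on a 4 KiB-block namespace (assumed format)
echo $((131072 / 4096))    # -> 32, matching the len:32 in every READ above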
[... the pattern repeats for the rest of the 2-second run: from 02:01:14.812 through 02:01:15.422, every READ (len:32) on qid:1 hits "data digest error on tqpair=(0x22521f0)" in nvme_tcp.c:1450 and completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), cid and lba varying per command, sqhd stepping through 0001/0021/0041/0061 ...]
00:32:51.698 [2024-05-15 02:01:15.428439] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0)
00:32:51.698
[2024-05-15 02:01:15.428469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.698 [2024-05-15 02:01:15.428487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:51.698 [2024-05-15 02:01:15.434387] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.698 [2024-05-15 02:01:15.434417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.698 [2024-05-15 02:01:15.434434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:51.698 [2024-05-15 02:01:15.440480] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.698 [2024-05-15 02:01:15.440509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.698 [2024-05-15 02:01:15.440526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:51.699 [2024-05-15 02:01:15.446522] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.699 [2024-05-15 02:01:15.446570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.699 [2024-05-15 02:01:15.446589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:51.699 [2024-05-15 02:01:15.452448] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.699 [2024-05-15 02:01:15.452477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.699 [2024-05-15 02:01:15.452509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:51.699 [2024-05-15 02:01:15.458408] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.699 [2024-05-15 02:01:15.458438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.699 [2024-05-15 02:01:15.458454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:51.699 [2024-05-15 02:01:15.464310] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.699 [2024-05-15 02:01:15.464340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.699 [2024-05-15 02:01:15.464357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:51.699 [2024-05-15 02:01:15.470231] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x22521f0) 00:32:51.699 [2024-05-15 02:01:15.470276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.699 [2024-05-15 02:01:15.470293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:51.699 [2024-05-15 02:01:15.476440] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.699 [2024-05-15 02:01:15.476472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.699 [2024-05-15 02:01:15.476489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:51.699 [2024-05-15 02:01:15.482442] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.699 [2024-05-15 02:01:15.482472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.699 [2024-05-15 02:01:15.482489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:51.699 [2024-05-15 02:01:15.488525] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.699 [2024-05-15 02:01:15.488574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.699 [2024-05-15 02:01:15.488600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:51.699 [2024-05-15 02:01:15.494652] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.699 [2024-05-15 02:01:15.494686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.699 [2024-05-15 02:01:15.494705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:51.699 [2024-05-15 02:01:15.500653] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.699 [2024-05-15 02:01:15.500686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.699 [2024-05-15 02:01:15.500705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:51.699 [2024-05-15 02:01:15.506600] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.699 [2024-05-15 02:01:15.506632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.699 [2024-05-15 02:01:15.506651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:51.699 [2024-05-15 02:01:15.512535] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.699 [2024-05-15 02:01:15.512569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.699 [2024-05-15 02:01:15.512587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:51.699 [2024-05-15 02:01:15.518723] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.699 [2024-05-15 02:01:15.518757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.699 [2024-05-15 02:01:15.518776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:51.699 [2024-05-15 02:01:15.524718] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.699 [2024-05-15 02:01:15.524752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.699 [2024-05-15 02:01:15.524770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:51.699 [2024-05-15 02:01:15.530807] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.699 [2024-05-15 02:01:15.530840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.699 [2024-05-15 02:01:15.530858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:51.699 [2024-05-15 02:01:15.536836] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.699 [2024-05-15 02:01:15.536869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.699 [2024-05-15 02:01:15.536889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:51.699 [2024-05-15 02:01:15.543001] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.699 [2024-05-15 02:01:15.543041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.699 [2024-05-15 02:01:15.543061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:51.699 [2024-05-15 02:01:15.548989] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.699 [2024-05-15 02:01:15.549022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.699 [2024-05-15 02:01:15.549041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:32:51.699 [2024-05-15 02:01:15.555070] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.699 [2024-05-15 02:01:15.555104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.699 [2024-05-15 02:01:15.555123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:51.699 [2024-05-15 02:01:15.560923] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.699 [2024-05-15 02:01:15.560956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.699 [2024-05-15 02:01:15.560975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:51.699 [2024-05-15 02:01:15.566841] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.699 [2024-05-15 02:01:15.566874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.699 [2024-05-15 02:01:15.566893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:51.699 [2024-05-15 02:01:15.573026] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.699 [2024-05-15 02:01:15.573059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.699 [2024-05-15 02:01:15.573078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:51.699 [2024-05-15 02:01:15.579093] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.699 [2024-05-15 02:01:15.579126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.699 [2024-05-15 02:01:15.579145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:51.699 [2024-05-15 02:01:15.585065] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.699 [2024-05-15 02:01:15.585097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.699 [2024-05-15 02:01:15.585115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:51.699 [2024-05-15 02:01:15.591120] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.699 [2024-05-15 02:01:15.591152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.699 [2024-05-15 02:01:15.591171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:51.699 [2024-05-15 02:01:15.597175] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.699 [2024-05-15 02:01:15.597207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.699 [2024-05-15 02:01:15.597250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:51.699 [2024-05-15 02:01:15.603264] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.699 [2024-05-15 02:01:15.603293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.699 [2024-05-15 02:01:15.603310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:51.699 [2024-05-15 02:01:15.609349] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.699 [2024-05-15 02:01:15.609379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.700 [2024-05-15 02:01:15.609396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:51.700 [2024-05-15 02:01:15.615389] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.700 [2024-05-15 02:01:15.615418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.700 [2024-05-15 02:01:15.615436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:51.700 [2024-05-15 02:01:15.621431] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.700 [2024-05-15 02:01:15.621460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.700 [2024-05-15 02:01:15.621477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:51.700 [2024-05-15 02:01:15.627486] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.700 [2024-05-15 02:01:15.627516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.700 [2024-05-15 02:01:15.627533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:51.958 [2024-05-15 02:01:15.633539] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.958 [2024-05-15 02:01:15.633571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.958 [2024-05-15 02:01:15.633590] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:51.958 [2024-05-15 02:01:15.639518] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.958 [2024-05-15 02:01:15.639569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.958 [2024-05-15 02:01:15.639588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:51.958 [2024-05-15 02:01:15.645477] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.958 [2024-05-15 02:01:15.645507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.958 [2024-05-15 02:01:15.645529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:51.958 [2024-05-15 02:01:15.651467] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.958 [2024-05-15 02:01:15.651513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.958 [2024-05-15 02:01:15.651532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:51.958 [2024-05-15 02:01:15.657317] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.958 [2024-05-15 02:01:15.657346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.958 [2024-05-15 02:01:15.657362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:51.958 [2024-05-15 02:01:15.662596] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.958 [2024-05-15 02:01:15.662625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.958 [2024-05-15 02:01:15.662642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:51.958 [2024-05-15 02:01:15.668576] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.958 [2024-05-15 02:01:15.668609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.958 [2024-05-15 02:01:15.668628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:51.958 [2024-05-15 02:01:15.674588] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.958 [2024-05-15 02:01:15.674621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.958 [2024-05-15 02:01:15.674640] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:51.958 [2024-05-15 02:01:15.680620] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.958 [2024-05-15 02:01:15.680653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.958 [2024-05-15 02:01:15.680672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:51.958 [2024-05-15 02:01:15.687080] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.958 [2024-05-15 02:01:15.687115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.958 [2024-05-15 02:01:15.687134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:51.958 [2024-05-15 02:01:15.693170] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.958 [2024-05-15 02:01:15.693203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.958 [2024-05-15 02:01:15.693231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:51.958 [2024-05-15 02:01:15.699203] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.958 [2024-05-15 02:01:15.699244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.958 [2024-05-15 02:01:15.699287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:51.958 [2024-05-15 02:01:15.705276] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.958 [2024-05-15 02:01:15.705305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.958 [2024-05-15 02:01:15.705322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:51.958 [2024-05-15 02:01:15.711323] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.958 [2024-05-15 02:01:15.711362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.958 [2024-05-15 02:01:15.711378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:51.958 [2024-05-15 02:01:15.717986] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.958 [2024-05-15 02:01:15.718020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:32:51.959 [2024-05-15 02:01:15.718039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:51.959 [2024-05-15 02:01:15.726037] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.959 [2024-05-15 02:01:15.726072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.959 [2024-05-15 02:01:15.726092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:51.959 [2024-05-15 02:01:15.733927] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.959 [2024-05-15 02:01:15.733961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.959 [2024-05-15 02:01:15.733980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:51.959 [2024-05-15 02:01:15.741456] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.959 [2024-05-15 02:01:15.741487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.959 [2024-05-15 02:01:15.741504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:51.959 [2024-05-15 02:01:15.747803] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.959 [2024-05-15 02:01:15.747836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.959 [2024-05-15 02:01:15.747855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:51.959 [2024-05-15 02:01:15.753775] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.959 [2024-05-15 02:01:15.753809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.959 [2024-05-15 02:01:15.753834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:51.959 [2024-05-15 02:01:15.759967] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.959 [2024-05-15 02:01:15.760001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.959 [2024-05-15 02:01:15.760020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:51.959 [2024-05-15 02:01:15.766652] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.959 [2024-05-15 02:01:15.766686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.959 [2024-05-15 02:01:15.766706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:51.959 [2024-05-15 02:01:15.773467] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.959 [2024-05-15 02:01:15.773499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.959 [2024-05-15 02:01:15.773517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:51.959 [2024-05-15 02:01:15.781376] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.959 [2024-05-15 02:01:15.781409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.959 [2024-05-15 02:01:15.781427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:51.959 [2024-05-15 02:01:15.788860] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.959 [2024-05-15 02:01:15.788896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.959 [2024-05-15 02:01:15.788915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:51.959 [2024-05-15 02:01:15.796391] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.959 [2024-05-15 02:01:15.796431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.959 [2024-05-15 02:01:15.796448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:51.959 [2024-05-15 02:01:15.803542] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.959 [2024-05-15 02:01:15.803592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.959 [2024-05-15 02:01:15.803611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:51.959 [2024-05-15 02:01:15.807598] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.959 [2024-05-15 02:01:15.807629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.959 [2024-05-15 02:01:15.807646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:51.959 [2024-05-15 02:01:15.814203] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.959 [2024-05-15 02:01:15.814262] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.959 [2024-05-15 02:01:15.814280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:51.959 [2024-05-15 02:01:15.821564] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.959 [2024-05-15 02:01:15.821597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.959 [2024-05-15 02:01:15.821617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:51.959 [2024-05-15 02:01:15.828973] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.959 [2024-05-15 02:01:15.829008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.959 [2024-05-15 02:01:15.829027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:51.959 [2024-05-15 02:01:15.836155] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.959 [2024-05-15 02:01:15.836187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.959 [2024-05-15 02:01:15.836205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:51.959 [2024-05-15 02:01:15.842652] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.959 [2024-05-15 02:01:15.842697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.959 [2024-05-15 02:01:15.842715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:51.959 [2024-05-15 02:01:15.849873] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.959 [2024-05-15 02:01:15.849905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.959 [2024-05-15 02:01:15.849923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:51.959 [2024-05-15 02:01:15.857586] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.959 [2024-05-15 02:01:15.857633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.959 [2024-05-15 02:01:15.857650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:51.959 [2024-05-15 02:01:15.865613] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.959 
[2024-05-15 02:01:15.865659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.959 [2024-05-15 02:01:15.865676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:51.959 [2024-05-15 02:01:15.873404] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.959 [2024-05-15 02:01:15.873436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.959 [2024-05-15 02:01:15.873453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:51.959 [2024-05-15 02:01:15.881081] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.959 [2024-05-15 02:01:15.881113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.959 [2024-05-15 02:01:15.881130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:51.959 [2024-05-15 02:01:15.888005] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:51.959 [2024-05-15 02:01:15.888037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.959 [2024-05-15 02:01:15.888054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:52.219 [2024-05-15 02:01:15.895860] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:52.219 [2024-05-15 02:01:15.895906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.219 [2024-05-15 02:01:15.895923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:52.219 [2024-05-15 02:01:15.903189] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:52.219 [2024-05-15 02:01:15.903228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.219 [2024-05-15 02:01:15.903266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:52.219 [2024-05-15 02:01:15.908299] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:52.219 [2024-05-15 02:01:15.908329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.219 [2024-05-15 02:01:15.908345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:52.219 [2024-05-15 02:01:15.913850] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x22521f0) 00:32:52.219 [2024-05-15 02:01:15.913881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.219 [2024-05-15 02:01:15.913897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:52.219 [2024-05-15 02:01:15.920118] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:52.219 [2024-05-15 02:01:15.920148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.219 [2024-05-15 02:01:15.920165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:52.219 [2024-05-15 02:01:15.927548] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:52.219 [2024-05-15 02:01:15.927580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.219 [2024-05-15 02:01:15.927597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:52.219 [2024-05-15 02:01:15.935388] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:52.219 [2024-05-15 02:01:15.935420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.219 [2024-05-15 02:01:15.935444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:52.219 [2024-05-15 02:01:15.944258] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:52.219 [2024-05-15 02:01:15.944290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.219 [2024-05-15 02:01:15.944307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:52.219 [2024-05-15 02:01:15.952285] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:52.219 [2024-05-15 02:01:15.952317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.219 [2024-05-15 02:01:15.952335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:52.219 [2024-05-15 02:01:15.960511] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:52.219 [2024-05-15 02:01:15.960543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.219 [2024-05-15 02:01:15.960560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:52.219 [2024-05-15 02:01:15.968532] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:52.219 [2024-05-15 02:01:15.968564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.219 [2024-05-15 02:01:15.968582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:52.219 [2024-05-15 02:01:15.976873] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:52.219 [2024-05-15 02:01:15.976905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.219 [2024-05-15 02:01:15.976922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:52.219 [2024-05-15 02:01:15.985751] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:52.219 [2024-05-15 02:01:15.985783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.219 [2024-05-15 02:01:15.985800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:52.219 [2024-05-15 02:01:15.993951] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:52.219 [2024-05-15 02:01:15.993983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.219 [2024-05-15 02:01:15.994001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:52.219 [2024-05-15 02:01:16.001773] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:52.219 [2024-05-15 02:01:16.001805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.219 [2024-05-15 02:01:16.001822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:52.219 [2024-05-15 02:01:16.006911] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:52.219 [2024-05-15 02:01:16.006949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.219 [2024-05-15 02:01:16.006967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:52.219 [2024-05-15 02:01:16.013801] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:52.219 [2024-05-15 02:01:16.013833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.219 [2024-05-15 02:01:16.013865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:32:52.219 [2024-05-15 02:01:16.022034] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:52.219 [2024-05-15 02:01:16.022066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.219 [2024-05-15 02:01:16.022083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:52.219 [2024-05-15 02:01:16.030152] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:52.219 [2024-05-15 02:01:16.030200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.219 [2024-05-15 02:01:16.030224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:52.219 [2024-05-15 02:01:16.038308] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:52.219 [2024-05-15 02:01:16.038341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.219 [2024-05-15 02:01:16.038358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:52.220 [2024-05-15 02:01:16.045005] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:52.220 [2024-05-15 02:01:16.045036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.220 [2024-05-15 02:01:16.045054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:52.220 [2024-05-15 02:01:16.050059] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:52.220 [2024-05-15 02:01:16.050090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.220 [2024-05-15 02:01:16.050106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:52.220 [2024-05-15 02:01:16.055554] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:52.220 [2024-05-15 02:01:16.055583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.220 [2024-05-15 02:01:16.055600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:52.220 [2024-05-15 02:01:16.060986] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0) 00:32:52.220 [2024-05-15 02:01:16.061016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.220 [2024-05-15 02:01:16.061032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:52.220 [2024-05-15 02:01:16.066415] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0)
00:32:52.220 [2024-05-15 02:01:16.066445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.220 [2024-05-15 02:01:16.066462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... dozens of near-identical completions omitted: from 02:01:16.066 through 02:01:16.801 every READ on qid:1 (cid and lba varying, len:32) fails the receive-path CRC32C check in nvme_tcp_accel_seq_recv_compute_crc32_done and is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:32:53.000 [2024-05-15 02:01:16.801639] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22521f0)
00:32:53.000 [2024-05-15 02:01:16.801677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:53.000 [2024-05-15 02:01:16.801695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:53.000
00:32:53.000 Latency(us)
00:32:53.000 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:53.000 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:32:53.000 nvme0n1 : 2.00 4875.35 609.42 0.00 0.00 3277.16 734.25 8980.86
00:32:53.001 ===================================================================================================================
00:32:53.001 Total : 4875.35 609.42 0.00 0.00 3277.16 734.25 8980.86
00:32:53.001 0
00:32:53.001 02:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:53.001 02:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:53.001 02:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:53.001 02:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:53.001 | .driver_specific
00:32:53.001 | .nvme_error
00:32:53.001 | .status_code
00:32:53.001 | .command_transient_transport_error'
00:32:53.259 02:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 314 > 0 ))
00:32:53.259 02:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 13741
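The (( 314 > 0 )) check a few records up is the pass/fail gate for the randread pass: bdevperf was started with --nvme-error-stat, so every digest failure above was tallied per NVMe status code, and the test only requires that at least one COMMAND TRANSIENT TRANSPORT ERROR was counted (this run saw 314). A minimal sketch of the helper visible in that trace, rebuilt from the xtrace output at host/digest.sh@27-28 (the in-tree version may differ cosmetically):

    # Rebuilt from the xtrace above, not copied from host/digest.sh.
    get_transient_errcount() {
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" \
            | jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    # The gate that follows in the trace: at least one transient transport
    # error must have been recorded for the run to pass.
    (( $(get_transient_errcount nvme0n1) > 0 ))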
00:32:53.259 02:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 13741 ']'
00:32:53.259 02:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 13741
00:32:53.259 02:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname
00:32:53.259 02:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:32:53.259 02:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 13741
00:32:53.259 02:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_1
00:32:53.259 02:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']'
00:32:53.259 02:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 13741'
00:32:53.259 killing process with pid 13741
00:32:53.259 02:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 13741
00:32:53.259 Received shutdown signal, test time was about 2.000000 seconds
00:32:53.259
00:32:53.259 Latency(us)
00:32:53.259 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:53.259 ===================================================================================================================
00:32:53.259 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:53.259 02:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 13741
00:32:53.517 02:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:32:53.517 02:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:32:53.517 02:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:32:53.517 02:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:32:53.517 02:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:32:53.517 02:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=14244
00:32:53.517 02:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:32:53.517 02:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 14244 /var/tmp/bperf.sock
00:32:53.517 02:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 14244 ']'
00:32:53.517 02:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock
00:32:53.517 02:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100
00:32:53.517 02:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:32:53.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
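The trace above is the stock bperf startup pattern used throughout this suite: bdevperf is launched idle against a private RPC socket, its pid is recorded, and the harness polls until the socket answers. Reduced to its essentials, under the paths seen in this job (the real waitforlisten in autotest_common.sh retries and validates more carefully):

    # Condensed sketch of the launch flow above; not the in-tree helper.
    bperf_sock=/var/tmp/bperf.sock

    # -z keeps bdevperf idle until perform_tests arrives over the RPC socket.
    build/examples/bdevperf -m 2 -r "$bperf_sock" -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!

    # Poll until the RPC socket accepts requests (up to roughly 10 seconds).
    for ((i = 0; i < 100; i++)); do
        scripts/rpc.py -s "$bperf_sock" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done

Running the workload generator with -z and driving it over RPC is what lets the test inject errors and attach the controller after the process is up but before any I/O is issued.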
00:32:53.517 02:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable 00:32:53.517 02:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:53.517 [2024-05-15 02:01:17.356859] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:32:53.517 [2024-05-15 02:01:17.356949] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid14244 ] 00:32:53.517 EAL: No free 2048 kB hugepages reported on node 1 00:32:53.517 [2024-05-15 02:01:17.429573] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:53.775 [2024-05-15 02:01:17.517842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:53.775 02:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:32:53.775 02:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0 00:32:53.775 02:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:53.775 02:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:54.033 02:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:54.033 02:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:54.033 02:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:54.033 02:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:54.033 02:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:54.033 02:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:54.598 nvme0n1 00:32:54.598 02:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:32:54.598 02:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:54.598 02:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:54.598 02:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:54.598 02:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:54.598 02:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:54.598 Running I/O for 2 seconds... 
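At this point the fixture is fully armed: nvme error statistics and unlimited bdev retries are enabled on the bdevperf side, the controller is attached with --ddgst so data digests from the target are verified on the host, and the target's accel layer is told to corrupt every 256th crc32c operation; each corrupted digest surfaces below as a COMMAND TRANSIENT TRANSPORT ERROR (00/22). A condensed sketch of the same RPC sequence recorded above, with the final read-back that get_transient_errcount performs; the target socket is an assumption (rpc_cmd in this log talks to the nvmf target on its default socket):

    # Host-side RPCs go to bdevperf's private socket; the crc32c fault
    # injection is issued to the NVMe-oF target app (assumed default socket).
    bperf="./scripts/rpc.py -s /var/tmp/bperf.sock"
    target="./scripts/rpc.py"
    $bperf bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $target accel_error_inject_error -o crc32c -t disable
    $bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $target accel_error_inject_error -o crc32c -t corrupt -i 256  # corrupt every 256th crc32c op
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
    # get_transient_errcount: every failed data digest is tallied here
    $bperf bdev_get_iostat -b nvme0n1 | jq -r \
        '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'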
00:32:54.599 [2024-05-15 02:01:18.392067] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190ee5c8 00:32:54.599 [2024-05-15 02:01:18.393101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.599 [2024-05-15 02:01:18.393156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:54.599 [2024-05-15 02:01:18.405697] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190fb480 00:32:54.599 [2024-05-15 02:01:18.406883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.599 [2024-05-15 02:01:18.406928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:54.599 [2024-05-15 02:01:18.417972] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190e2c28 00:32:54.599 [2024-05-15 02:01:18.419131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.599 [2024-05-15 02:01:18.419165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:54.599 [2024-05-15 02:01:18.432355] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190f2d80 00:32:54.599 [2024-05-15 02:01:18.433643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.599 [2024-05-15 02:01:18.433676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:54.599 [2024-05-15 02:01:18.446831] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190f1ca0 00:32:54.599 [2024-05-15 02:01:18.448859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.599 [2024-05-15 02:01:18.448891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:54.599 [2024-05-15 02:01:18.455851] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190f9f68 00:32:54.599 [2024-05-15 02:01:18.456676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:2661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.599 [2024-05-15 02:01:18.456709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:32:54.599 [2024-05-15 02:01:18.467889] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190ed0b0 00:32:54.599 [2024-05-15 02:01:18.468702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:18696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.599 [2024-05-15 02:01:18.468746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 
sqhd:0003 p:0 m:0 dnr:0 00:32:54.599 [2024-05-15 02:01:18.481774] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190e7818 00:32:54.599 [2024-05-15 02:01:18.482747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:10591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.599 [2024-05-15 02:01:18.482780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:54.599 [2024-05-15 02:01:18.494926] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190eb328 00:32:54.599 [2024-05-15 02:01:18.496060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:3510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.599 [2024-05-15 02:01:18.496091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:54.599 [2024-05-15 02:01:18.506939] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190fcdd0 00:32:54.599 [2024-05-15 02:01:18.508090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.599 [2024-05-15 02:01:18.508122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:54.599 [2024-05-15 02:01:18.521081] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190de8a8 00:32:54.599 [2024-05-15 02:01:18.522456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.599 [2024-05-15 02:01:18.522501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:54.857 [2024-05-15 02:01:18.534404] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190f8618 00:32:54.857 [2024-05-15 02:01:18.535883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:11556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.857 [2024-05-15 02:01:18.535914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:54.857 [2024-05-15 02:01:18.546358] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190f6458 00:32:54.857 [2024-05-15 02:01:18.547846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.857 [2024-05-15 02:01:18.547879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:54.857 [2024-05-15 02:01:18.558361] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190f9f68 00:32:54.857 [2024-05-15 02:01:18.559356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:19806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.857 [2024-05-15 02:01:18.559384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:50 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:54.857 [2024-05-15 02:01:18.571212] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190f20d8 00:32:54.857 [2024-05-15 02:01:18.572036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.857 [2024-05-15 02:01:18.572069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:54.857 [2024-05-15 02:01:18.584546] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190ed920 00:32:54.857 [2024-05-15 02:01:18.585532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.857 [2024-05-15 02:01:18.585564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:54.857 [2024-05-15 02:01:18.596584] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190dfdc0 00:32:54.857 [2024-05-15 02:01:18.598409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.857 [2024-05-15 02:01:18.598439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:54.857 [2024-05-15 02:01:18.608328] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190ee190 00:32:54.857 [2024-05-15 02:01:18.609127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:8477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.857 [2024-05-15 02:01:18.609160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:54.857 [2024-05-15 02:01:18.621589] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190e6300 00:32:54.857 [2024-05-15 02:01:18.622545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.857 [2024-05-15 02:01:18.622591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:54.857 [2024-05-15 02:01:18.634884] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190eb760 00:32:54.857 [2024-05-15 02:01:18.636040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.857 [2024-05-15 02:01:18.636072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:54.857 [2024-05-15 02:01:18.646801] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190de8a8 00:32:54.857 [2024-05-15 02:01:18.647953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.857 [2024-05-15 02:01:18.647985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:54.857 [2024-05-15 02:01:18.660139] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190fb480 00:32:54.857 [2024-05-15 02:01:18.661467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.857 [2024-05-15 02:01:18.661510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:32:54.857 [2024-05-15 02:01:18.672054] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190f46d0 00:32:54.857 [2024-05-15 02:01:18.672839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.857 [2024-05-15 02:01:18.672871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:54.857 [2024-05-15 02:01:18.684894] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190ea248 00:32:54.857 [2024-05-15 02:01:18.685591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:12324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.857 [2024-05-15 02:01:18.685623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:54.857 [2024-05-15 02:01:18.699519] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190e5658 00:32:54.857 [2024-05-15 02:01:18.701172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.857 [2024-05-15 02:01:18.701204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:54.857 [2024-05-15 02:01:18.711415] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190e4578 00:32:54.857 [2024-05-15 02:01:18.712559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.857 [2024-05-15 02:01:18.712590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:54.857 [2024-05-15 02:01:18.724297] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190df118 00:32:54.857 [2024-05-15 02:01:18.725292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.857 [2024-05-15 02:01:18.725326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:54.857 [2024-05-15 02:01:18.737267] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190de470 00:32:54.857 [2024-05-15 02:01:18.738567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.857 [2024-05-15 02:01:18.738600] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:54.857 [2024-05-15 02:01:18.751570] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190e7c50 00:32:54.857 [2024-05-15 02:01:18.753616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.857 [2024-05-15 02:01:18.753648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:54.857 [2024-05-15 02:01:18.760526] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190ec840 00:32:54.857 [2024-05-15 02:01:18.761373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:15254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.857 [2024-05-15 02:01:18.761414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:32:54.857 [2024-05-15 02:01:18.773356] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190e38d0 00:32:54.857 [2024-05-15 02:01:18.774151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.857 [2024-05-15 02:01:18.774181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:54.857 [2024-05-15 02:01:18.786392] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190e6fa8 00:32:54.857 [2024-05-15 02:01:18.787439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.857 [2024-05-15 02:01:18.787482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:55.116 [2024-05-15 02:01:18.798452] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190f5378 00:32:55.116 [2024-05-15 02:01:18.799433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.116 [2024-05-15 02:01:18.799475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:55.116 [2024-05-15 02:01:18.811740] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190f7538 00:32:55.116 [2024-05-15 02:01:18.812879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.116 [2024-05-15 02:01:18.812910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:55.116 [2024-05-15 02:01:18.825909] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190e4140 00:32:55.116 [2024-05-15 02:01:18.827288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.116 [2024-05-15 02:01:18.827315] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:55.116 [2024-05-15 02:01:18.839017] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190f0bc0 00:32:55.116 [2024-05-15 02:01:18.840522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.116 [2024-05-15 02:01:18.840554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:55.116 [2024-05-15 02:01:18.849832] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190ea248 00:32:55.116 [2024-05-15 02:01:18.850507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.116 [2024-05-15 02:01:18.850538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:55.116 [2024-05-15 02:01:18.863081] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190f9f68 00:32:55.116 [2024-05-15 02:01:18.863896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:3373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.116 [2024-05-15 02:01:18.863927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:55.116 [2024-05-15 02:01:18.876371] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190f8a50 00:32:55.116 [2024-05-15 02:01:18.877390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.116 [2024-05-15 02:01:18.877418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:55.116 [2024-05-15 02:01:18.890939] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190f4b08 00:32:55.116 [2024-05-15 02:01:18.892919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.116 [2024-05-15 02:01:18.892950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:55.116 [2024-05-15 02:01:18.899929] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190fc128 00:32:55.116 [2024-05-15 02:01:18.900765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:25538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.116 [2024-05-15 02:01:18.900796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:32:55.117 [2024-05-15 02:01:18.911979] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190f7da8 00:32:55.117 [2024-05-15 02:01:18.912787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:4653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.117 [2024-05-15 
02:01:18.912818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:55.117 [2024-05-15 02:01:18.926058] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190e0a68 00:32:55.117 [2024-05-15 02:01:18.927054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.117 [2024-05-15 02:01:18.927086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:55.117 [2024-05-15 02:01:18.939183] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190e1710 00:32:55.117 [2024-05-15 02:01:18.940374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:17717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.117 [2024-05-15 02:01:18.940417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:55.117 [2024-05-15 02:01:18.951208] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190e6738 00:32:55.117 [2024-05-15 02:01:18.952379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:8762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.117 [2024-05-15 02:01:18.952422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:55.117 [2024-05-15 02:01:18.964430] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190e95a0 00:32:55.117 [2024-05-15 02:01:18.965733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:3154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.117 [2024-05-15 02:01:18.965764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:32:55.117 [2024-05-15 02:01:18.976332] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190f20d8 00:32:55.117 [2024-05-15 02:01:18.977106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:9892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.117 [2024-05-15 02:01:18.977137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:55.117 [2024-05-15 02:01:18.989103] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190ea248 00:32:55.117 [2024-05-15 02:01:18.989760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.117 [2024-05-15 02:01:18.989791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:55.117 [2024-05-15 02:01:19.002430] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190e6300 00:32:55.117 [2024-05-15 02:01:19.003267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:24659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:55.117 [2024-05-15 02:01:19.003296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:55.117 [2024-05-15 02:01:19.016915] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190e8d30 00:32:55.117 [2024-05-15 02:01:19.018721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.117 [2024-05-15 02:01:19.018753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:55.117 [2024-05-15 02:01:19.028763] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190ef6a8 00:32:55.117 [2024-05-15 02:01:19.030067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.117 [2024-05-15 02:01:19.030098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:55.117 [2024-05-15 02:01:19.040264] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190e5220 00:32:55.117 [2024-05-15 02:01:19.042073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.117 [2024-05-15 02:01:19.042105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:55.375 [2024-05-15 02:01:19.051304] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190ecc78 00:32:55.375 [2024-05-15 02:01:19.052094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.375 [2024-05-15 02:01:19.052131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:55.375 [2024-05-15 02:01:19.065478] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190ebb98 00:32:55.375 [2024-05-15 02:01:19.066477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.375 [2024-05-15 02:01:19.066522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:55.375 [2024-05-15 02:01:19.078604] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190f5378 00:32:55.375 [2024-05-15 02:01:19.079742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.375 [2024-05-15 02:01:19.079776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:55.375 [2024-05-15 02:01:19.091487] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190f1430 00:32:55.375 [2024-05-15 02:01:19.092642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2733 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:32:55.375 [2024-05-15 02:01:19.092676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:55.375 [2024-05-15 02:01:19.104573] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190fa3a0 00:32:55.375 [2024-05-15 02:01:19.105940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:16284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.376 [2024-05-15 02:01:19.105973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:55.376 [2024-05-15 02:01:19.116619] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190f2d80 00:32:55.376 [2024-05-15 02:01:19.117927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.376 [2024-05-15 02:01:19.117959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:32:55.376 [2024-05-15 02:01:19.129901] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190f5be8 00:32:55.376 [2024-05-15 02:01:19.131384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.376 [2024-05-15 02:01:19.131426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:55.376 [2024-05-15 02:01:19.143294] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190e4140 00:32:55.376 [2024-05-15 02:01:19.144902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:6512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.376 [2024-05-15 02:01:19.144935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:55.376 [2024-05-15 02:01:19.155045] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190fef90 00:32:55.376 [2024-05-15 02:01:19.156212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:18880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.376 [2024-05-15 02:01:19.156251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:55.376 [2024-05-15 02:01:19.166711] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190f35f0 00:32:55.376 [2024-05-15 02:01:19.167835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:18672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.376 [2024-05-15 02:01:19.167874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:55.376 [2024-05-15 02:01:19.180866] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190df988 00:32:55.376 [2024-05-15 02:01:19.182224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 
lba:17288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.376 [2024-05-15 02:01:19.182271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:55.376 [2024-05-15 02:01:19.193930] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190fef90 00:32:55.376 [2024-05-15 02:01:19.195407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.376 [2024-05-15 02:01:19.195449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:55.376 [2024-05-15 02:01:19.205846] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190f9b30 00:32:55.376 [2024-05-15 02:01:19.207372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:17769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.376 [2024-05-15 02:01:19.207415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:55.376 [2024-05-15 02:01:19.219154] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190fda78 00:32:55.376 [2024-05-15 02:01:19.220797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:25431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.376 [2024-05-15 02:01:19.220829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:55.376 [2024-05-15 02:01:19.230971] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190fac10 00:32:55.376 [2024-05-15 02:01:19.232102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.376 [2024-05-15 02:01:19.232135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:55.376 [2024-05-15 02:01:19.243765] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190e0630 00:32:55.376 [2024-05-15 02:01:19.244756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:18868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.376 [2024-05-15 02:01:19.244788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:55.376 [2024-05-15 02:01:19.258310] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190f31b8 00:32:55.376 [2024-05-15 02:01:19.260316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:8938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.376 [2024-05-15 02:01:19.260359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:55.376 [2024-05-15 02:01:19.267381] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190e6738 00:32:55.376 [2024-05-15 02:01:19.268170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:37 nsid:1 lba:12484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.376 [2024-05-15 02:01:19.268202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:32:55.376 [2024-05-15 02:01:19.281654] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190f46d0 00:32:55.376 [2024-05-15 02:01:19.283713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:9667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.376 [2024-05-15 02:01:19.283745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.376 [2024-05-15 02:01:19.295703] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190eee38 00:32:55.376 [2024-05-15 02:01:19.297371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.376 [2024-05-15 02:01:19.297413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.376 [2024-05-15 02:01:19.306135] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190efae0 00:32:55.635 [2024-05-15 02:01:19.307116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.635 [2024-05-15 02:01:19.307149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:55.635 [2024-05-15 02:01:19.319318] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190e6fa8 00:32:55.635 [2024-05-15 02:01:19.320418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.635 [2024-05-15 02:01:19.320462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:55.635 [2024-05-15 02:01:19.331237] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190fda78 00:32:55.635 [2024-05-15 02:01:19.332396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:24738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.635 [2024-05-15 02:01:19.332439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:55.635 [2024-05-15 02:01:19.345361] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190eea00 00:32:55.635 [2024-05-15 02:01:19.346637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:19363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.635 [2024-05-15 02:01:19.346668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:55.635 [2024-05-15 02:01:19.357170] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190ef270 00:32:55.635 [2024-05-15 02:01:19.358436] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:13418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.635 [2024-05-15 02:01:19.358477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:32:55.635 [2024-05-15 02:01:19.370408] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190f2d80 00:32:55.635 [2024-05-15 02:01:19.371839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.635 [2024-05-15 02:01:19.371871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:32:55.635 [2024-05-15 02:01:19.383655] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190fdeb0 00:32:55.635 [2024-05-15 02:01:19.385295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:19519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.635 [2024-05-15 02:01:19.385337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:55.635 [2024-05-15 02:01:19.396907] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190f3e60 00:32:55.635 [2024-05-15 02:01:19.398679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.635 [2024-05-15 02:01:19.398712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:55.635 [2024-05-15 02:01:19.410117] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190e2c28 00:32:55.635 [2024-05-15 02:01:19.412131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:9916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.635 [2024-05-15 02:01:19.412163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:55.635 [2024-05-15 02:01:19.423468] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190f46d0 00:32:55.635 [2024-05-15 02:01:19.425639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:3875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.635 [2024-05-15 02:01:19.425671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.635 [2024-05-15 02:01:19.432569] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190e0a68 00:32:55.635 [2024-05-15 02:01:19.433533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.635 [2024-05-15 02:01:19.433564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:32:55.635 [2024-05-15 02:01:19.445857] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190e5a90 00:32:55.635 [2024-05-15 
02:01:19.446969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.635 [2024-05-15 02:01:19.447012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:55.635 [2024-05-15 02:01:19.457856] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190f2510 00:32:55.636 [2024-05-15 02:01:19.458976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:21388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.636 [2024-05-15 02:01:19.459008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:55.636 [2024-05-15 02:01:19.472008] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190ff3c8 00:32:55.636 [2024-05-15 02:01:19.473380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.636 [2024-05-15 02:01:19.473422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:55.636 [2024-05-15 02:01:19.485096] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190fcdd0 00:32:55.636 [2024-05-15 02:01:19.486539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.636 [2024-05-15 02:01:19.486586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:55.636 [2024-05-15 02:01:19.498389] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190f5378 00:32:55.636 [2024-05-15 02:01:19.500003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:21430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.636 [2024-05-15 02:01:19.500041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:55.636 [2024-05-15 02:01:19.510347] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190fbcf0 00:32:55.636 [2024-05-15 02:01:19.511925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:15579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.636 [2024-05-15 02:01:19.511956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:55.636 [2024-05-15 02:01:19.522111] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190f5378 00:32:55.636 [2024-05-15 02:01:19.523265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.636 [2024-05-15 02:01:19.523292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:55.636 [2024-05-15 02:01:19.534942] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190f8e88 
00:32:55.636 [2024-05-15 02:01:19.535903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.636 [2024-05-15 02:01:19.535936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:55.636 [2024-05-15 02:01:19.548201] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190e9168 00:32:55.636 [2024-05-15 02:01:19.549401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:15150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.636 [2024-05-15 02:01:19.549430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.636 [2024-05-15 02:01:19.561139] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190f31b8 00:32:55.636 [2024-05-15 02:01:19.562703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.636 [2024-05-15 02:01:19.562738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.894 [2024-05-15 02:01:19.574280] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190ebfd0 00:32:55.894 [2024-05-15 02:01:19.575921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:10075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.894 [2024-05-15 02:01:19.575952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:55.894 [2024-05-15 02:01:19.584716] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190fda78 00:32:55.895 [2024-05-15 02:01:19.585650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.895 [2024-05-15 02:01:19.585681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:55.895 [2024-05-15 02:01:19.597837] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190f1430 00:32:55.895 [2024-05-15 02:01:19.598885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.895 [2024-05-15 02:01:19.598917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:55.895 [2024-05-15 02:01:19.609818] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190e6738 00:32:55.895 [2024-05-15 02:01:19.610906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:3758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:55.895 [2024-05-15 02:01:19.610937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:55.895 [2024-05-15 02:01:19.624024] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with 
pdu=0x2000190e7818
[2024-05-15 02:01:19.625337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-05-15 02:01:19.625379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006d p:0 m:0 dnr:0
[2024-05-15 02:01:19.637103] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190f7970
[2024-05-15 02:01:19.638561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:24438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-05-15 02:01:19.638592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006d p:0 m:0 dnr:0
[... the same three-line pattern, a data digest error on tqpair=(0x18f3c70), the affected WRITE, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, repeats for dozens more PDUs between 02:01:19.651 and 02:01:20.377 ...]
[2024-05-15 02:01:20.375652] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f3c70) with pdu=0x2000190fb8b8
[2024-05-15 02:01:20.377117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-05-15 02:01:20.377148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0071 p:0 m:0 dnr:0

                                                           Latency(us)
Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
nvme0n1                     :       2.01   20089.64      78.48       0.00     0.00    6360.05    2706.39   15825.73
===================================================================================================================
Total                       :              20089.64      78.48       0.00     0.00    6360.05    2706.39   15825.73
0
02:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
02:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
02:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
02:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
02:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 157 > 0 ))
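This is the assertion the whole phase builds up to: get_transient_errcount reads the controller's NVMe error counters out of bdev_get_iostat, and the test passes here because the injected digest failures surfaced as 157 COMMAND TRANSIENT TRANSPORT ERROR completions. A minimal stand-alone sketch of the same check, with the socket path, bdev name, and jq filter copied from the trace above (run from the SPDK source root against a live bdevperf instance):

    #!/usr/bin/env bash
    # Sketch of host/digest.sh's get_transient_errcount helper.
    # Assumes a bdevperf app serving RPCs on /var/tmp/bperf.sock with an
    # attached bdev named nvme0n1 (both values come from the trace above).
    sock=/var/tmp/bperf.sock
    bdev=nvme0n1

    # bdev_nvme_set_options --nvme-error-stat (sent during setup, below) makes
    # bdev_get_iostat report per-status-code NVMe error counts for the bdev.
    errs=$(scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

    # Mirror of the (( 157 > 0 )) assertion in the trace: fail if no
    # transient transport errors were counted.
    (( errs > 0 )) || { echo "no transient transport errors recorded" >&2; exit 1; }
    echo "transient transport errors: $errs"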
02:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 14244
02:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 14244 ']'
02:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 14244
02:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname
02:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
02:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 14244
02:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_1
02:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']'
02:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 14244'
killing process with pid 14244
02:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 14244
Received shutdown signal, test time was about 2.000000 seconds

                                                           Latency(us)
Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
===================================================================================================================
Total                       :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
02:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 14244
02:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
02:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
02:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
02:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
02:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
02:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=14653
02:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
02:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 14653 /var/tmp/bperf.sock
02:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 14653 ']'
02:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock
02:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100
02:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
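run_bperf_err then restarts bdevperf for the next scenario: 128 KiB random writes at queue depth 16 for 2 seconds. The -z flag makes bdevperf start idle and wait for a perform_tests RPC, so error injection can be configured before any I/O is issued, and waitforlisten polls the RPC socket until the new process answers. A rough equivalent of this launch-and-wait step; the flags are copied from the trace, while the rpc_get_methods readiness probe is an assumption (waitforlisten's internals are not shown here):

    #!/usr/bin/env bash
    # Sketch of the traced bdevperf launch: core mask 0x2, private RPC socket,
    # 128 KiB random writes, qd 16, 2 s runtime; -z waits for perform_tests.
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$spdk/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!

    # Poll the UNIX-domain RPC socket until the app is ready to serve RPCs.
    until "$spdk/scripts/rpc.py" -t 1 -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done
    echo "bdevperf (pid $bperfpid) is listening on /var/tmp/bperf.sock"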
02:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable
02:01:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
[2024-05-15 02:01:20.925398] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization...
[2024-05-15 02:01:20.925490] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid14653 ]
I/O size of 131072 is greater than zero copy threshold (65536). Zero copy mechanism will not be used.
EAL: No free 2048 kB hugepages reported on node 1
[2024-05-15 02:01:20.992095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-05-15 02:01:21.077098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
02:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 ))
02:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0
02:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
02:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
02:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
02:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
02:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
02:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
02:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
02:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
nvme0n1
02:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
02:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
02:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
02:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
02:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
02:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
I/O size of 131072 is greater than zero copy threshold (65536). Zero copy mechanism will not be used.
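The RPC sequence just traced is what provokes the flood of digest errors that follows. On the bperf side, bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 enables per-status-code error counting and disables retries, so every failed WRITE is terminal and countable, and bdev_nvme_attach_controller --ddgst connects with the NVMe/TCP data digest (a CRC32C over each data PDU) enabled. The two accel_error_inject_error calls go through rpc_cmd, i.e. to the target application's default RPC socket: the first clears any leftover injection, the second corrupts the target's crc32c results at an interval of 32 operations, so the target's digest verification of incoming write data miscompares (the tcp.c "Data digest error" lines below come from the target's TCP transport) and each affected WRITE fails with COMMAND TRANSIENT TRANSPORT ERROR (00/22). Condensed into plain shell, with the socket split inferred from how bperf_rpc and rpc_cmd are used in the trace:

    #!/usr/bin/env bash
    # Sketch of the traced setup. bperf_rpc targets the bdevperf host app;
    # rpc_cmd targets the NVMe-oF target on its default RPC socket.
    bperf_rpc="scripts/rpc.py -s /var/tmp/bperf.sock"
    tgt_rpc="scripts/rpc.py"

    # Count completions per NVMe status code; never retry, so each injected
    # digest error becomes a final, countable failure.
    $bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Clear any crc32c injection left over from the previous phase.
    $tgt_rpc accel_error_inject_error -o crc32c -t disable

    # Attach the target namespace with TCP data digest enabled.
    $bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt every 32nd crc32c on the target, then kick off the workload.
    $tgt_rpc accel_error_inject_error -o crc32c -t corrupt -i 32
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests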
Running I/O for 2 seconds...
[2024-05-15 02:01:21.896871] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90
[2024-05-15 02:01:21.897237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-05-15 02:01:21.897302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[2024-05-15 02:01:21.903868] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90
[2024-05-15 02:01:21.904203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-05-15 02:01:21.904263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-line pattern repeats every few milliseconds on tqpair=(0x18f5300); the WRITEs are now len:32 blocks, matching this run's 131072-byte I/Os, and each one completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
[2024-05-15 02:01:22.316996] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90
[2024-05-15 02:01:22.317284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-05-15 02:01:22.317314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:58.524 [2024-05-15 02:01:22.322458] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.524 [2024-05-15 02:01:22.322733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.524 [2024-05-15 02:01:22.322763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:58.524 [2024-05-15 02:01:22.327236] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.524 [2024-05-15 02:01:22.327501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.524 [2024-05-15 02:01:22.327531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:58.524 [2024-05-15 02:01:22.331886] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.524 [2024-05-15 02:01:22.332149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.524 [2024-05-15 02:01:22.332180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.524 [2024-05-15 02:01:22.337562] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.524 [2024-05-15 02:01:22.337824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.524 [2024-05-15 02:01:22.337854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:58.524 [2024-05-15 02:01:22.343078] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.524 [2024-05-15 02:01:22.343347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.524 [2024-05-15 02:01:22.343377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:58.524 [2024-05-15 02:01:22.347868] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.524 [2024-05-15 02:01:22.348127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.524 [2024-05-15 02:01:22.348157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:58.524 [2024-05-15 02:01:22.352816] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.524 [2024-05-15 02:01:22.353077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.524 [2024-05-15 02:01:22.353108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.524 [2024-05-15 02:01:22.357848] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.524 [2024-05-15 02:01:22.358109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.524 [2024-05-15 02:01:22.358139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:58.524 [2024-05-15 02:01:22.362655] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.524 [2024-05-15 02:01:22.362916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.524 [2024-05-15 02:01:22.362947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:58.524 [2024-05-15 02:01:22.367488] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.524 [2024-05-15 02:01:22.367749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.524 [2024-05-15 02:01:22.367786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:58.524 [2024-05-15 02:01:22.372333] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.524 [2024-05-15 02:01:22.372610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.524 [2024-05-15 02:01:22.372654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.524 [2024-05-15 02:01:22.377152] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.524 [2024-05-15 02:01:22.377414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.524 [2024-05-15 02:01:22.377444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:58.524 [2024-05-15 02:01:22.381973] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.524 [2024-05-15 02:01:22.382240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.524 [2024-05-15 02:01:22.382271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:58.524 [2024-05-15 02:01:22.386754] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.524 [2024-05-15 02:01:22.387014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.524 [2024-05-15 02:01:22.387044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:58.524 [2024-05-15 02:01:22.391613] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.524 [2024-05-15 02:01:22.391904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.524 [2024-05-15 02:01:22.391934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.524 [2024-05-15 02:01:22.396417] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.524 [2024-05-15 02:01:22.396678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.524 [2024-05-15 02:01:22.396708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:58.524 [2024-05-15 02:01:22.401118] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.524 [2024-05-15 02:01:22.401388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.524 [2024-05-15 02:01:22.401417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:58.524 [2024-05-15 02:01:22.406089] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.524 [2024-05-15 02:01:22.406357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.524 [2024-05-15 02:01:22.406388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:58.524 [2024-05-15 02:01:22.410910] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.524 [2024-05-15 02:01:22.411179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.524 [2024-05-15 02:01:22.411209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.524 [2024-05-15 02:01:22.415788] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.524 [2024-05-15 02:01:22.416048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.525 [2024-05-15 02:01:22.416078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:58.525 [2024-05-15 02:01:22.420413] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.525 [2024-05-15 02:01:22.420673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.525 [2024-05-15 02:01:22.420703] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:58.525 [2024-05-15 02:01:22.425868] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.525 [2024-05-15 02:01:22.426143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.525 [2024-05-15 02:01:22.426188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:58.525 [2024-05-15 02:01:22.431238] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.525 [2024-05-15 02:01:22.431499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.525 [2024-05-15 02:01:22.431543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.525 [2024-05-15 02:01:22.436070] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.525 [2024-05-15 02:01:22.436338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.525 [2024-05-15 02:01:22.436368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:58.525 [2024-05-15 02:01:22.440935] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.525 [2024-05-15 02:01:22.441195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.525 [2024-05-15 02:01:22.441234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:58.525 [2024-05-15 02:01:22.445714] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.525 [2024-05-15 02:01:22.445978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.525 [2024-05-15 02:01:22.446008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:58.525 [2024-05-15 02:01:22.450508] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.525 [2024-05-15 02:01:22.450768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.525 [2024-05-15 02:01:22.450797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.784 [2024-05-15 02:01:22.455420] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.784 [2024-05-15 02:01:22.455694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.784 
[2024-05-15 02:01:22.455738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:58.784 [2024-05-15 02:01:22.460190] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.784 [2024-05-15 02:01:22.460461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.784 [2024-05-15 02:01:22.460492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:58.784 [2024-05-15 02:01:22.465106] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.784 [2024-05-15 02:01:22.465376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.784 [2024-05-15 02:01:22.465406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:58.784 [2024-05-15 02:01:22.470925] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.784 [2024-05-15 02:01:22.471188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.784 [2024-05-15 02:01:22.471227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.784 [2024-05-15 02:01:22.476395] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.784 [2024-05-15 02:01:22.476673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.784 [2024-05-15 02:01:22.476703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:58.784 [2024-05-15 02:01:22.483084] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.784 [2024-05-15 02:01:22.483410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.784 [2024-05-15 02:01:22.483441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:58.784 [2024-05-15 02:01:22.489275] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.784 [2024-05-15 02:01:22.489577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.784 [2024-05-15 02:01:22.489605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:58.784 [2024-05-15 02:01:22.494636] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.784 [2024-05-15 02:01:22.494897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.784 [2024-05-15 02:01:22.494937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.784 [2024-05-15 02:01:22.499437] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.784 [2024-05-15 02:01:22.499718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.784 [2024-05-15 02:01:22.499749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:58.784 [2024-05-15 02:01:22.505660] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.784 [2024-05-15 02:01:22.505969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.784 [2024-05-15 02:01:22.505997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:58.784 [2024-05-15 02:01:22.512555] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.785 [2024-05-15 02:01:22.512829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.785 [2024-05-15 02:01:22.512858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:58.785 [2024-05-15 02:01:22.518941] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.785 [2024-05-15 02:01:22.519312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.785 [2024-05-15 02:01:22.519342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.785 [2024-05-15 02:01:22.525888] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.785 [2024-05-15 02:01:22.526165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.785 [2024-05-15 02:01:22.526194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:58.785 [2024-05-15 02:01:22.531873] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.785 [2024-05-15 02:01:22.532148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.785 [2024-05-15 02:01:22.532176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:58.785 [2024-05-15 02:01:22.536674] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.785 [2024-05-15 02:01:22.536933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.785 [2024-05-15 02:01:22.536962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:58.785 [2024-05-15 02:01:22.541330] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.785 [2024-05-15 02:01:22.541592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.785 [2024-05-15 02:01:22.541622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.785 [2024-05-15 02:01:22.545977] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.785 [2024-05-15 02:01:22.546244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.785 [2024-05-15 02:01:22.546285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:58.785 [2024-05-15 02:01:22.551487] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.785 [2024-05-15 02:01:22.551747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.785 [2024-05-15 02:01:22.551778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:58.785 [2024-05-15 02:01:22.558030] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.785 [2024-05-15 02:01:22.558313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.785 [2024-05-15 02:01:22.558343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:58.785 [2024-05-15 02:01:22.563059] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.785 [2024-05-15 02:01:22.563331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.785 [2024-05-15 02:01:22.563361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:58.785 [2024-05-15 02:01:22.567913] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.785 [2024-05-15 02:01:22.568178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:58.785 [2024-05-15 02:01:22.568208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:58.785 [2024-05-15 02:01:22.572581] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:58.785 [2024-05-15 02:01:22.572842] nvme_qpair.c: 
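For context on what data_crc32_calc_done is checking in the entries above: when data digests are negotiated on an NVMe/TCP connection, each data PDU carries a DDGST field, a CRC-32C (Castagnoli) computed over the PDU's DATA field. The receiver recomputes the digest and, on mismatch, fails the command at the transport level rather than as a media error, which is exactly the pattern this test provokes. A minimal bitwise sketch of that check follows; it is illustrative only, not SPDK's table-driven implementation, and the pdu_data/received_ddgst names are hypothetical:

    #include <stdint.h>
    #include <stddef.h>
    #include <assert.h>

    /* Reflected CRC-32C (Castagnoli) polynomial, the digest NVMe/TCP
     * uses for HDGST/DDGST. Bitwise for clarity; production code would
     * use a table-driven or instruction-accelerated variant. */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int bit = 0; bit < 8; bit++)
                crc = (crc >> 1) ^ ((crc & 1) ? 0x82F63B78u : 0);
        }
        return crc ^ 0xFFFFFFFFu;
    }

    /* Returns 1 when the digest carried in the PDU matches the payload,
     * 0 on a "Data digest error" like the ones logged above.
     * (Hypothetical helper, not an SPDK API.) */
    static int ddgst_ok(const uint8_t *pdu_data, size_t len,
                        uint32_t received_ddgst)
    {
        return crc32c(pdu_data, len) == received_ddgst;
    }

    int main(void)
    {
        /* Standard CRC-32C check value for the string "123456789". */
        assert(crc32c((const uint8_t *)"123456789", 9) == 0xE3069283u);
        return ddgst_ok((const uint8_t *)"123456789", 9, 0xE3069283u) ? 0 : 1;
    }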
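The completion lines decode the same way throughout: "(00/22)" is (status code type / status code), i.e. SCT 0h (generic command status) with SC 22h, which the NVMe base specification defines as Transient Transport Error, and dnr:0 means the do-not-retry bit is clear, so the host is allowed to retry the WRITE. A sketch of pulling those fields out of the 16-bit status in completion dword 3, with the bit layout taken from the NVMe base spec (again an illustration, not an SPDK API):

    #include <stdint.h>
    #include <stdio.h>

    /* NVMe completion status field (upper 16 bits of CQE dword 3):
     * bit 0 = phase tag (P), bits 8:1 = status code (SC),
     * bits 11:9 = status code type (SCT), bits 13:12 = CRD,
     * bit 14 = more (M), bit 15 = do not retry (DNR). */
    struct nvme_status {
        unsigned p, sc, sct, crd, m, dnr;
    };

    static struct nvme_status decode_status(uint16_t s)
    {
        struct nvme_status st = {
            .p   =  s        & 0x1,
            .sc  = (s >> 1)  & 0xFF,
            .sct = (s >> 9)  & 0x7,
            .crd = (s >> 12) & 0x3,
            .m   = (s >> 14) & 0x1,
            .dnr = (s >> 15) & 0x1,
        };
        return st;
    }

    int main(void)
    {
        /* SCT 0h / SC 22h with P, M, DNR all clear, as in the log. */
        struct nvme_status st = decode_status((uint16_t)(0x22u << 1));
        printf("(%02x/%02x) p:%u m:%u dnr:%u\n",
               st.sct, st.sc, st.p, st.m, st.dnr);  /* (00/22) p:0 m:0 dnr:0 */
        return 0;
    }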
[... the identical digest-error/WRITE/transient-transport-error pattern continues unchanged from 02:01:22.577 through roughly 02:01:22.896 ...] 00:32:59.046 [2024-05-15 02:01:22.901021] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.046 [2024-05-15 02:01:22.901482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.046 [2024-05-15 02:01:22.901513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.046 [2024-05-15 02:01:22.905908] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.046 [2024-05-15 02:01:22.906169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.047 [2024-05-15 02:01:22.906199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.047 [2024-05-15 02:01:22.911331] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.047 [2024-05-15 02:01:22.911717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.047 [2024-05-15 02:01:22.911747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.047 [2024-05-15 02:01:22.918005] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.047 [2024-05-15 02:01:22.918396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.047 [2024-05-15 02:01:22.918426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.047 [2024-05-15 02:01:22.924284] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.047 [2024-05-15 02:01:22.924603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.047 [2024-05-15 02:01:22.924631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.047 [2024-05-15 02:01:22.929689] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.047 [2024-05-15 02:01:22.929953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.047 [2024-05-15 02:01:22.929982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.047 [2024-05-15 02:01:22.934704] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.047 [2024-05-15 02:01:22.934965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.047 [2024-05-15 02:01:22.934995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.047 [2024-05-15 02:01:22.939742] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.047 [2024-05-15 02:01:22.940004] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.047 [2024-05-15 02:01:22.940032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.047 [2024-05-15 02:01:22.944527] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.047 [2024-05-15 02:01:22.944788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.047 [2024-05-15 02:01:22.944817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.047 [2024-05-15 02:01:22.949418] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.047 [2024-05-15 02:01:22.949679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.047 [2024-05-15 02:01:22.949708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.047 [2024-05-15 02:01:22.955166] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.047 [2024-05-15 02:01:22.955466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.047 [2024-05-15 02:01:22.955495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.047 [2024-05-15 02:01:22.960349] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.047 [2024-05-15 02:01:22.960626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.047 [2024-05-15 02:01:22.960661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.047 [2024-05-15 02:01:22.965118] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.047 [2024-05-15 02:01:22.965387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.047 [2024-05-15 02:01:22.965416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.047 [2024-05-15 02:01:22.970025] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.047 [2024-05-15 02:01:22.970294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.047 [2024-05-15 02:01:22.970323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.047 [2024-05-15 02:01:22.974896] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.047 [2024-05-15 02:01:22.975158] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.047 [2024-05-15 02:01:22.975187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.306 [2024-05-15 02:01:22.979756] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.306 [2024-05-15 02:01:22.980017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.306 [2024-05-15 02:01:22.980047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.306 [2024-05-15 02:01:22.984563] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.306 [2024-05-15 02:01:22.984826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.306 [2024-05-15 02:01:22.984854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.306 [2024-05-15 02:01:22.989329] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.306 [2024-05-15 02:01:22.989620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.306 [2024-05-15 02:01:22.989649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.306 [2024-05-15 02:01:22.994279] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.306 [2024-05-15 02:01:22.994557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.306 [2024-05-15 02:01:22.994586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.306 [2024-05-15 02:01:22.999023] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.306 [2024-05-15 02:01:22.999288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.306 [2024-05-15 02:01:22.999317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.306 [2024-05-15 02:01:23.003909] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.306 [2024-05-15 02:01:23.004178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.306 [2024-05-15 02:01:23.004207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.306 [2024-05-15 02:01:23.008686] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 
00:32:59.306 [2024-05-15 02:01:23.008947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.306 [2024-05-15 02:01:23.008975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.306 [2024-05-15 02:01:23.013524] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.306 [2024-05-15 02:01:23.013784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.306 [2024-05-15 02:01:23.013813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.306 [2024-05-15 02:01:23.018392] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.306 [2024-05-15 02:01:23.018655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.306 [2024-05-15 02:01:23.018685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.306 [2024-05-15 02:01:23.023163] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.306 [2024-05-15 02:01:23.023427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.306 [2024-05-15 02:01:23.023459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.306 [2024-05-15 02:01:23.028037] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.306 [2024-05-15 02:01:23.028308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.306 [2024-05-15 02:01:23.028338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.306 [2024-05-15 02:01:23.032769] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.306 [2024-05-15 02:01:23.033058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.306 [2024-05-15 02:01:23.033088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.306 [2024-05-15 02:01:23.038267] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.306 [2024-05-15 02:01:23.038527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.306 [2024-05-15 02:01:23.038558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.307 [2024-05-15 02:01:23.043628] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.307 [2024-05-15 02:01:23.043892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.307 [2024-05-15 02:01:23.043922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.307 [2024-05-15 02:01:23.048444] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.307 [2024-05-15 02:01:23.048703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.307 [2024-05-15 02:01:23.048733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.307 [2024-05-15 02:01:23.053239] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.307 [2024-05-15 02:01:23.053515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.307 [2024-05-15 02:01:23.053545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.307 [2024-05-15 02:01:23.058275] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.307 [2024-05-15 02:01:23.058537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.307 [2024-05-15 02:01:23.058566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.307 [2024-05-15 02:01:23.063917] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.307 [2024-05-15 02:01:23.064319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.307 [2024-05-15 02:01:23.064349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.307 [2024-05-15 02:01:23.071045] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.307 [2024-05-15 02:01:23.071323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.307 [2024-05-15 02:01:23.071357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.307 [2024-05-15 02:01:23.076584] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.307 [2024-05-15 02:01:23.076870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.307 [2024-05-15 02:01:23.076900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.307 [2024-05-15 02:01:23.082137] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.307 [2024-05-15 02:01:23.082405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.307 [2024-05-15 02:01:23.082436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.307 [2024-05-15 02:01:23.087522] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.307 [2024-05-15 02:01:23.087783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.307 [2024-05-15 02:01:23.087813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.307 [2024-05-15 02:01:23.093812] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.307 [2024-05-15 02:01:23.094073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.307 [2024-05-15 02:01:23.094112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.307 [2024-05-15 02:01:23.099966] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.307 [2024-05-15 02:01:23.100240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.307 [2024-05-15 02:01:23.100270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.307 [2024-05-15 02:01:23.107190] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.307 [2024-05-15 02:01:23.107466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.307 [2024-05-15 02:01:23.107497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.307 [2024-05-15 02:01:23.113886] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.307 [2024-05-15 02:01:23.114183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.307 [2024-05-15 02:01:23.114213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.307 [2024-05-15 02:01:23.121394] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.307 [2024-05-15 02:01:23.121670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.307 [2024-05-15 02:01:23.121699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:32:59.307 [2024-05-15 02:01:23.128429] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.307 [2024-05-15 02:01:23.128763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.307 [2024-05-15 02:01:23.128792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.307 [2024-05-15 02:01:23.135808] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.307 [2024-05-15 02:01:23.136187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.307 [2024-05-15 02:01:23.136226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.307 [2024-05-15 02:01:23.143201] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.307 [2024-05-15 02:01:23.143507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.307 [2024-05-15 02:01:23.143536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.307 [2024-05-15 02:01:23.150338] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.307 [2024-05-15 02:01:23.150684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.307 [2024-05-15 02:01:23.150713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.307 [2024-05-15 02:01:23.157759] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.307 [2024-05-15 02:01:23.158128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.307 [2024-05-15 02:01:23.158158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.307 [2024-05-15 02:01:23.164720] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.307 [2024-05-15 02:01:23.165091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.307 [2024-05-15 02:01:23.165121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.307 [2024-05-15 02:01:23.170770] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.307 [2024-05-15 02:01:23.171043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.307 [2024-05-15 02:01:23.171075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.307 [2024-05-15 02:01:23.177053] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.307 [2024-05-15 02:01:23.177361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.307 [2024-05-15 02:01:23.177391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.307 [2024-05-15 02:01:23.183662] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.307 [2024-05-15 02:01:23.184049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.307 [2024-05-15 02:01:23.184079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.307 [2024-05-15 02:01:23.190067] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.307 [2024-05-15 02:01:23.190336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.307 [2024-05-15 02:01:23.190367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.307 [2024-05-15 02:01:23.196880] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.307 [2024-05-15 02:01:23.197194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.307 [2024-05-15 02:01:23.197231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.307 [2024-05-15 02:01:23.203472] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.307 [2024-05-15 02:01:23.203742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.307 [2024-05-15 02:01:23.203772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.307 [2024-05-15 02:01:23.210248] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.307 [2024-05-15 02:01:23.210543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.307 [2024-05-15 02:01:23.210572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.307 [2024-05-15 02:01:23.217327] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.307 [2024-05-15 02:01:23.217603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.307 [2024-05-15 02:01:23.217633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.308 [2024-05-15 02:01:23.224693] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.308 [2024-05-15 02:01:23.225060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.308 [2024-05-15 02:01:23.225089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.308 [2024-05-15 02:01:23.231466] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.308 [2024-05-15 02:01:23.231727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.308 [2024-05-15 02:01:23.231758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.567 [2024-05-15 02:01:23.237315] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.567 [2024-05-15 02:01:23.237579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.567 [2024-05-15 02:01:23.237609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.567 [2024-05-15 02:01:23.243542] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.567 [2024-05-15 02:01:23.243803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.567 [2024-05-15 02:01:23.243833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.567 [2024-05-15 02:01:23.249351] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.567 [2024-05-15 02:01:23.249613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.567 [2024-05-15 02:01:23.249643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.567 [2024-05-15 02:01:23.254995] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.567 [2024-05-15 02:01:23.255264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.567 [2024-05-15 02:01:23.255295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.567 [2024-05-15 02:01:23.260020] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.567 [2024-05-15 02:01:23.260288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.567 [2024-05-15 02:01:23.260318] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.567 [2024-05-15 02:01:23.264794] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.567 [2024-05-15 02:01:23.265053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.567 [2024-05-15 02:01:23.265093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.567 [2024-05-15 02:01:23.269695] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.567 [2024-05-15 02:01:23.269957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.567 [2024-05-15 02:01:23.269988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.567 [2024-05-15 02:01:23.274392] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.567 [2024-05-15 02:01:23.274655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.567 [2024-05-15 02:01:23.274686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.567 [2024-05-15 02:01:23.279509] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.567 [2024-05-15 02:01:23.279788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.567 [2024-05-15 02:01:23.279817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.567 [2024-05-15 02:01:23.285336] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.567 [2024-05-15 02:01:23.285613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.567 [2024-05-15 02:01:23.285658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.567 [2024-05-15 02:01:23.290370] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.567 [2024-05-15 02:01:23.290662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.567 [2024-05-15 02:01:23.290693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.567 [2024-05-15 02:01:23.295278] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.567 [2024-05-15 02:01:23.295556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.567 
[2024-05-15 02:01:23.295584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.567 [2024-05-15 02:01:23.300046] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.567 [2024-05-15 02:01:23.300312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.567 [2024-05-15 02:01:23.300341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.567 [2024-05-15 02:01:23.304795] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.567 [2024-05-15 02:01:23.305058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.567 [2024-05-15 02:01:23.305088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.567 [2024-05-15 02:01:23.309659] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.567 [2024-05-15 02:01:23.309927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.567 [2024-05-15 02:01:23.309956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.567 [2024-05-15 02:01:23.314534] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.567 [2024-05-15 02:01:23.314795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.567 [2024-05-15 02:01:23.314824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.567 [2024-05-15 02:01:23.319437] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.567 [2024-05-15 02:01:23.319698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.568 [2024-05-15 02:01:23.319728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.568 [2024-05-15 02:01:23.324337] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.568 [2024-05-15 02:01:23.324600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.568 [2024-05-15 02:01:23.324629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.568 [2024-05-15 02:01:23.328992] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.568 [2024-05-15 02:01:23.329260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.568 [2024-05-15 02:01:23.329290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.568 [2024-05-15 02:01:23.334426] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.568 [2024-05-15 02:01:23.334690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.568 [2024-05-15 02:01:23.334719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.568 [2024-05-15 02:01:23.339949] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.568 [2024-05-15 02:01:23.340209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.568 [2024-05-15 02:01:23.340245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.568 [2024-05-15 02:01:23.344776] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.568 [2024-05-15 02:01:23.345035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.568 [2024-05-15 02:01:23.345065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.568 [2024-05-15 02:01:23.349657] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.568 [2024-05-15 02:01:23.349917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.568 [2024-05-15 02:01:23.349955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.568 [2024-05-15 02:01:23.354605] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.568 [2024-05-15 02:01:23.354866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.568 [2024-05-15 02:01:23.354897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.568 [2024-05-15 02:01:23.359270] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.568 [2024-05-15 02:01:23.359546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.568 [2024-05-15 02:01:23.359575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.568 [2024-05-15 02:01:23.364107] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.568 [2024-05-15 02:01:23.364375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.568 [2024-05-15 02:01:23.364404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.568 [2024-05-15 02:01:23.368818] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.568 [2024-05-15 02:01:23.369075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.568 [2024-05-15 02:01:23.369105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.568 [2024-05-15 02:01:23.373553] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.568 [2024-05-15 02:01:23.373818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.568 [2024-05-15 02:01:23.373849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.568 [2024-05-15 02:01:23.378205] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.568 [2024-05-15 02:01:23.378474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.568 [2024-05-15 02:01:23.378503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.568 [2024-05-15 02:01:23.383080] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.568 [2024-05-15 02:01:23.383349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.568 [2024-05-15 02:01:23.383379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.568 [2024-05-15 02:01:23.388788] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.568 [2024-05-15 02:01:23.389049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.568 [2024-05-15 02:01:23.389093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.568 [2024-05-15 02:01:23.393694] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.568 [2024-05-15 02:01:23.393965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.568 [2024-05-15 02:01:23.393995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.568 [2024-05-15 02:01:23.398568] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.568 [2024-05-15 02:01:23.398828] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.568 [2024-05-15 02:01:23.398858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.568 [2024-05-15 02:01:23.403482] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.568 [2024-05-15 02:01:23.403743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.568 [2024-05-15 02:01:23.403773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.568 [2024-05-15 02:01:23.408303] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.568 [2024-05-15 02:01:23.408589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.568 [2024-05-15 02:01:23.408622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.568 [2024-05-15 02:01:23.413533] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.568 [2024-05-15 02:01:23.413836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.568 [2024-05-15 02:01:23.413866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.568 [2024-05-15 02:01:23.418683] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.568 [2024-05-15 02:01:23.418974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.568 [2024-05-15 02:01:23.419007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:59.568 [2024-05-15 02:01:23.424498] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.568 [2024-05-15 02:01:23.424802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.568 [2024-05-15 02:01:23.424834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:59.568 [2024-05-15 02:01:23.430509] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.568 [2024-05-15 02:01:23.430802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.568 [2024-05-15 02:01:23.430834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:59.568 [2024-05-15 02:01:23.435780] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:32:59.568 
[2024-05-15 02:01:23.436071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:59.568 [2024-05-15 02:01:23.436103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.568
[dozens of near-identical records elided, 02:01:23.441 through 02:01:23.876: each WRITE on sqid:1 cid:15 nsid:1 (len:32, varying lba) triggered tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 and completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22), sqhd cycling through 0001/0021/0041/0061]
[2024-05-15 02:01:23.883717] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:33:00.090
[2024-05-15 02:01:23.884086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.090 [2024-05-15 02:01:23.884119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:00.090 [2024-05-15 02:01:23.891395] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18f5300) with pdu=0x2000190fef90 00:33:00.090 [2024-05-15 02:01:23.891688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:00.090 [2024-05-15 02:01:23.891720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:00.090 00:33:00.090 Latency(us) 00:33:00.090 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:00.090 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:00.090 nvme0n1 : 2.00 5487.10 685.89 0.00 0.00 2907.87 2172.40 8932.31 00:33:00.090 =================================================================================================================== 00:33:00.090 Total : 5487.10 685.89 0.00 0.00 2907.87 2172.40 8932.31 00:33:00.090 0 00:33:00.090 02:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:00.090 02:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:00.090 02:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:00.090 | .driver_specific 00:33:00.090 | .nvme_error 00:33:00.090 | .status_code 00:33:00.090 | .command_transient_transport_error' 00:33:00.090 02:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:00.347 02:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 354 > 0 )) 00:33:00.347 02:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 14653 00:33:00.347 02:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 14653 ']' 00:33:00.347 02:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 14653 00:33:00.347 02:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname 00:33:00.347 02:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:33:00.347 02:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 14653 00:33:00.347 02:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:33:00.347 02:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:33:00.347 02:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 14653' 00:33:00.347 killing process with pid 14653 00:33:00.347 02:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 14653 00:33:00.347 Received shutdown signal, test time was about 2.000000 seconds 00:33:00.347 00:33:00.347 Latency(us) 00:33:00.347 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:33:00.347 =================================================================================================================== 00:33:00.347 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:00.348 02:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 14653 00:33:00.605 02:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 13294 00:33:00.605 02:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 13294 ']' 00:33:00.605 02:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 13294 00:33:00.605 02:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname 00:33:00.605 02:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:33:00.605 02:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 13294 00:33:00.605 02:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:33:00.605 02:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:33:00.605 02:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 13294' 00:33:00.605 killing process with pid 13294 00:33:00.605 02:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 13294 00:33:00.605 [2024-05-15 02:01:24.465103] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:33:00.605 02:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 13294 00:33:00.863 00:33:00.863 real 0m15.145s 00:33:00.863 user 0m29.636s 00:33:00.863 sys 0m4.326s 00:33:00.863 02:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # xtrace_disable 00:33:00.863 02:01:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:00.863 ************************************ 00:33:00.863 END TEST nvmf_digest_error 00:33:00.863 ************************************ 00:33:00.863 02:01:24 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:33:00.863 02:01:24 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:33:00.863 02:01:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:00.863 02:01:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:33:00.863 02:01:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:00.863 02:01:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:33:00.863 02:01:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:00.863 02:01:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:00.863 rmmod nvme_tcp 00:33:00.863 rmmod nvme_fabrics 00:33:00.863 rmmod nvme_keyring 00:33:00.863 02:01:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:00.863 02:01:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:33:00.863 02:01:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:33:00.863 02:01:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 13294 ']' 00:33:00.863 02:01:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 13294 
00:33:00.863 02:01:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@947 -- # '[' -z 13294 ']' 00:33:00.863 02:01:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@951 -- # kill -0 13294 00:33:00.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (13294) - No such process 00:33:00.863 02:01:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@974 -- # echo 'Process with pid 13294 is not found' 00:33:00.863 Process with pid 13294 is not found 00:33:00.863 02:01:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:00.863 02:01:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:00.863 02:01:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:00.863 02:01:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:00.863 02:01:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:00.863 02:01:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:00.863 02:01:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:00.863 02:01:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:03.395 02:01:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:03.395 00:33:03.395 real 0m35.248s 00:33:03.395 user 1m0.517s 00:33:03.395 sys 0m10.544s 00:33:03.395 02:01:26 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # xtrace_disable 00:33:03.395 02:01:26 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:03.395 ************************************ 00:33:03.395 END TEST nvmf_digest 00:33:03.395 ************************************ 00:33:03.395 02:01:26 nvmf_tcp -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:33:03.395 02:01:26 nvmf_tcp -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:33:03.395 02:01:26 nvmf_tcp -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:33:03.395 02:01:26 nvmf_tcp -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:03.395 02:01:26 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:33:03.395 02:01:26 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:33:03.395 02:01:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:03.395 ************************************ 00:33:03.395 START TEST nvmf_bdevperf 00:33:03.395 ************************************ 00:33:03.395 02:01:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:03.395 * Looking for test storage... 
00:33:03.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:03.395 02:01:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:03.395 02:01:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:33:03.395 02:01:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:03.395 02:01:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:03.395 02:01:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:03.395 02:01:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:03.395 02:01:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:03.395 02:01:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:03.395 02:01:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:03.395 02:01:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:03.395 02:01:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:03.395 02:01:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:03.395 02:01:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:33:03.395 02:01:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:33:03.395 02:01:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:03.395 02:01:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:03.395 02:01:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:03.395 02:01:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:03.395 02:01:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:03.395 02:01:26 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:03.395 02:01:26 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:03.395 02:01:26 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:03.395 02:01:26 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.395 02:01:26 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.395 02:01:26 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.395 02:01:26 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:33:03.396 02:01:26 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.396 02:01:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:33:03.396 02:01:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:03.396 02:01:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:03.396 02:01:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:03.396 02:01:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:03.396 02:01:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:03.396 02:01:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:03.396 02:01:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:03.396 02:01:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:03.396 02:01:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:03.396 02:01:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:03.396 02:01:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:33:03.396 02:01:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:03.396 02:01:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:03.396 02:01:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:03.396 02:01:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:03.396 02:01:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:03.396 02:01:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:03.396 02:01:26 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:03.396 02:01:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:03.396 02:01:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:03.396 02:01:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:03.396 02:01:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:33:03.396 02:01:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:33:05.919 Found 0000:09:00.0 (0x8086 - 0x159b) 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:33:05.919 Found 0000:09:00.1 (0x8086 - 0x159b) 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:05.919 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:33:05.920 Found net devices under 0000:09:00.0: cvl_0_0 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:33:05.920 Found net devices under 0000:09:00.1: cvl_0_1 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:05.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:05.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:33:05.920 00:33:05.920 --- 10.0.0.2 ping statistics --- 00:33:05.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:05.920 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:05.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:05.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:33:05.920 00:33:05.920 --- 10.0.0.1 ping statistics --- 00:33:05.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:05.920 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@721 -- # xtrace_disable 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=17310 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 17310 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@828 -- # '[' -z 17310 ']' 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local max_retries=100 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:05.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@837 -- # xtrace_disable 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:05.920 [2024-05-15 02:01:29.578508] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:33:05.920 [2024-05-15 02:01:29.578596] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:05.920 EAL: No free 2048 kB hugepages reported on node 1 00:33:05.920 [2024-05-15 02:01:29.650731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:05.920 [2024-05-15 02:01:29.738134] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
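For reference, the nvmf_tcp_init sequence traced above reduces to the following namespace plumbing; a minimal sketch using the cvl_0_0/cvl_0_1 interface names discovered for the two E810 ports (run as root):

# recreate the two-port TCP test bed: target port in a netns, initiator port in the default ns
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                  # initiator -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator reachability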
00:33:05.920 [2024-05-15 02:01:29.738179] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:05.920 [2024-05-15 02:01:29.738224] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:05.920 [2024-05-15 02:01:29.738237] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:05.920 [2024-05-15 02:01:29.738249] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:05.920 [2024-05-15 02:01:29.738373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:05.920 [2024-05-15 02:01:29.738437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:05.920 [2024-05-15 02:01:29.738440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@861 -- # return 0 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@727 -- # xtrace_disable 00:33:05.920 02:01:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:06.178 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:06.178 02:01:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:06.178 02:01:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:06.178 02:01:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:06.178 [2024-05-15 02:01:29.873813] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:06.178 02:01:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:06.178 02:01:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:06.178 02:01:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:06.178 02:01:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:06.178 Malloc0 00:33:06.178 02:01:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:06.178 02:01:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:06.178 02:01:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:06.178 02:01:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:06.178 02:01:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:06.178 02:01:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:06.178 02:01:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:06.178 02:01:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:06.178 02:01:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:06.178 02:01:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:06.178 02:01:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 
00:33:06.178 02:01:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:06.178 [2024-05-15 02:01:29.943457] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:33:06.178 [2024-05-15 02:01:29.943801] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:06.178 02:01:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:06.178 02:01:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:33:06.178 02:01:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:33:06.178 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:06.178 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:06.178 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:06.178 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:06.178 { 00:33:06.178 "params": { 00:33:06.178 "name": "Nvme$subsystem", 00:33:06.178 "trtype": "$TEST_TRANSPORT", 00:33:06.178 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:06.178 "adrfam": "ipv4", 00:33:06.178 "trsvcid": "$NVMF_PORT", 00:33:06.178 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:06.178 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:06.178 "hdgst": ${hdgst:-false}, 00:33:06.179 "ddgst": ${ddgst:-false} 00:33:06.179 }, 00:33:06.179 "method": "bdev_nvme_attach_controller" 00:33:06.179 } 00:33:06.179 EOF 00:33:06.179 )") 00:33:06.179 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:06.179 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:33:06.179 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:06.179 02:01:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:06.179 "params": { 00:33:06.179 "name": "Nvme1", 00:33:06.179 "trtype": "tcp", 00:33:06.179 "traddr": "10.0.0.2", 00:33:06.179 "adrfam": "ipv4", 00:33:06.179 "trsvcid": "4420", 00:33:06.179 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:06.179 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:06.179 "hdgst": false, 00:33:06.179 "ddgst": false 00:33:06.179 }, 00:33:06.179 "method": "bdev_nvme_attach_controller" 00:33:06.179 }' 00:33:06.179 [2024-05-15 02:01:29.990024] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:33:06.179 [2024-05-15 02:01:29.990099] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid17456 ] 00:33:06.179 EAL: No free 2048 kB hugepages reported on node 1 00:33:06.179 [2024-05-15 02:01:30.064797] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:06.436 [2024-05-15 02:01:30.159720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:06.693 Running I/O for 1 seconds... 
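The rpc_cmd calls above are thin wrappers over SPDK's RPC client; a sketch of the same target provisioning done by hand (the scripts/rpc.py path is an assumption, the method names and flags are taken verbatim from the trace):

# TCP transport, one 64 MB malloc-backed namespace, one listener on the target address
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420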
00:33:07.625 00:33:07.625 Latency(us) 00:33:07.625 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:07.625 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:07.625 Verification LBA range: start 0x0 length 0x4000 00:33:07.625 Nvme1n1 : 1.01 8675.57 33.89 0.00 0.00 14691.49 2961.26 13883.92 00:33:07.625 =================================================================================================================== 00:33:07.625 Total : 8675.57 33.89 0.00 0.00 14691.49 2961.26 13883.92 00:33:07.882 02:01:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=17593 00:33:07.882 02:01:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:33:07.882 02:01:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:33:07.882 02:01:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:33:07.882 02:01:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:07.882 02:01:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:07.882 02:01:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:07.882 02:01:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:07.882 { 00:33:07.882 "params": { 00:33:07.882 "name": "Nvme$subsystem", 00:33:07.882 "trtype": "$TEST_TRANSPORT", 00:33:07.882 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:07.882 "adrfam": "ipv4", 00:33:07.882 "trsvcid": "$NVMF_PORT", 00:33:07.882 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:07.882 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:07.882 "hdgst": ${hdgst:-false}, 00:33:07.882 "ddgst": ${ddgst:-false} 00:33:07.882 }, 00:33:07.882 "method": "bdev_nvme_attach_controller" 00:33:07.882 } 00:33:07.882 EOF 00:33:07.882 )") 00:33:07.882 02:01:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:07.882 02:01:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:33:07.882 02:01:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:07.882 02:01:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:07.882 "params": { 00:33:07.882 "name": "Nvme1", 00:33:07.882 "trtype": "tcp", 00:33:07.882 "traddr": "10.0.0.2", 00:33:07.882 "adrfam": "ipv4", 00:33:07.882 "trsvcid": "4420", 00:33:07.882 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:07.882 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:07.883 "hdgst": false, 00:33:07.883 "ddgst": false 00:33:07.883 }, 00:33:07.883 "method": "bdev_nvme_attach_controller" 00:33:07.883 }' 00:33:07.883 [2024-05-15 02:01:31.730667] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:33:07.883 [2024-05-15 02:01:31.730756] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid17593 ] 00:33:07.883 EAL: No free 2048 kB hugepages reported on node 1 00:33:07.883 [2024-05-15 02:01:31.800821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:08.140 [2024-05-15 02:01:31.887265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:08.140 Running I/O for 15 seconds... 
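The second bdevperf invocation repeats the first with -t 15 and adds -f so the process survives target failure; a standalone sketch of the same exercise (the wrapper layout of nvme1.json is an assumption, only the inner bdev_nvme_attach_controller object appears verbatim above):

# attach config equivalent to what gen_nvmf_target_json feeds over /dev/fd/63
cat > /tmp/nvme1.json <<'JSON'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false, "ddgst": false },
    "method": "bdev_nvme_attach_controller" } ] } ] }
JSON
# 15 s verify run; kill the target 3 s in and let bdevperf hit the reset path
build/examples/bdevperf --json /tmp/nvme1.json -q 128 -o 4096 -w verify -t 15 -f &
sleep 3
kill -9 "$nvmfpid"   # nvmfpid held the nvmf_tgt pid (17310 in this run)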
00:33:11.423 02:01:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 17310 00:33:11.423 02:01:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:33:11.423 [2024-05-15 02:01:34.704841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:52168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:11.423 [2024-05-15 02:01:34.704901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.423 [2024-05-15 02:01:34.704936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:52232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:11.423 [2024-05-15 02:01:34.704954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... further nvme_io_qpair_print_command / spdk_nvme_print_completion pairs elided: every remaining in-flight READ and WRITE (lba 52176 through 53176, len:8) completes with ABORTED - SQ DELETION (00/08) qid:1 ...]
00:33:11.426 [2024-05-15 02:01:34.709536] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ab010 is same with the state(5) to be set 00:33:11.426 [2024-05-15 02:01:34.709558] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:11.426 [2024-05-15 02:01:34.709571] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:11.426 [2024-05-15 02:01:34.709584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53184 len:8 PRP1 0x0 PRP2 0x0 00:33:11.426 [2024-05-15 02:01:34.709599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:11.426 [2024-05-15 02:01:34.709670] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6ab010 was disconnected and freed. reset controller.
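Every I/O in flight when the target died is completed back to bdevperf with ABORTED - SQ DELETION status before the qpair is freed; a quick sketch for sizing the abort storm from a saved console log (the bdevperf.log filename is an assumption):

# count aborted completions and show the first/last affected LBA
grep -c 'ABORTED - SQ DELETION' bdevperf.log
grep -o 'lba:[0-9]*' bdevperf.log | sort -t: -k2 -n | sed -n '1p;$p'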
00:33:11.426 [2024-05-15 02:01:34.713376] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.426 [2024-05-15 02:01:34.713446] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:11.426 [2024-05-15 02:01:34.714166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.426 [2024-05-15 02:01:34.714375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.426 [2024-05-15 02:01:34.714401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:11.426 [2024-05-15 02:01:34.714423] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:11.426 [2024-05-15 02:01:34.714681] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:11.426 [2024-05-15 02:01:34.714928] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.426 [2024-05-15 02:01:34.714951] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.426 [2024-05-15 02:01:34.714970] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.426 [2024-05-15 02:01:34.718653] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:11.426 [2024-05-15 02:01:34.727626] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.426 [2024-05-15 02:01:34.728030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.426 [2024-05-15 02:01:34.728162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.426 [2024-05-15 02:01:34.728192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:11.426 [2024-05-15 02:01:34.728211] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:11.426 [2024-05-15 02:01:34.728470] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:11.426 [2024-05-15 02:01:34.728732] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.426 [2024-05-15 02:01:34.728757] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.426 [2024-05-15 02:01:34.728773] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.426 [2024-05-15 02:01:34.732423] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.426 [2024-05-15 02:01:34.741655] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.426 [2024-05-15 02:01:34.742142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.426 [2024-05-15 02:01:34.742345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.426 [2024-05-15 02:01:34.742375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:11.426 [2024-05-15 02:01:34.742393] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:11.426 [2024-05-15 02:01:34.742636] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:11.426 [2024-05-15 02:01:34.742884] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.426 [2024-05-15 02:01:34.742909] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.426 [2024-05-15 02:01:34.742925] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.426 [2024-05-15 02:01:34.746568] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:11.426 [2024-05-15 02:01:34.755584] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.426 [2024-05-15 02:01:34.756072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.427 [2024-05-15 02:01:34.756234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.427 [2024-05-15 02:01:34.756263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:11.427 [2024-05-15 02:01:34.756281] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:11.427 [2024-05-15 02:01:34.756529] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:11.427 [2024-05-15 02:01:34.756777] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.427 [2024-05-15 02:01:34.756803] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.427 [2024-05-15 02:01:34.756819] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.427 [2024-05-15 02:01:34.760466] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.427 [2024-05-15 02:01:34.769685] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.427 [2024-05-15 02:01:34.770145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.427 [2024-05-15 02:01:34.770336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.427 [2024-05-15 02:01:34.770365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:11.427 [2024-05-15 02:01:34.770384] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:11.427 [2024-05-15 02:01:34.770627] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:11.427 [2024-05-15 02:01:34.770875] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.427 [2024-05-15 02:01:34.770900] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.427 [2024-05-15 02:01:34.770917] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.427 [2024-05-15 02:01:34.774558] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:11.427 [2024-05-15 02:01:34.783772] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.427 [2024-05-15 02:01:34.784164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.427 [2024-05-15 02:01:34.784391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.427 [2024-05-15 02:01:34.784418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:11.427 [2024-05-15 02:01:34.784435] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:11.427 [2024-05-15 02:01:34.784695] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:11.427 [2024-05-15 02:01:34.784943] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.427 [2024-05-15 02:01:34.784968] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.427 [2024-05-15 02:01:34.784984] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.427 [2024-05-15 02:01:34.788623] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.427 [2024-05-15 02:01:34.797836] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.427 [2024-05-15 02:01:34.798238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.427 [2024-05-15 02:01:34.798408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.427 [2024-05-15 02:01:34.798436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:11.427 [2024-05-15 02:01:34.798454] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:11.427 [2024-05-15 02:01:34.798696] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:11.427 [2024-05-15 02:01:34.798954] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.427 [2024-05-15 02:01:34.798980] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.427 [2024-05-15 02:01:34.798997] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.427 [2024-05-15 02:01:34.802638] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:11.427 [2024-05-15 02:01:34.811851] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.427 [2024-05-15 02:01:34.812239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.427 [2024-05-15 02:01:34.812359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.427 [2024-05-15 02:01:34.812388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:11.427 [2024-05-15 02:01:34.812406] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:11.427 [2024-05-15 02:01:34.812649] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:11.427 [2024-05-15 02:01:34.812896] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.427 [2024-05-15 02:01:34.812921] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.427 [2024-05-15 02:01:34.812938] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.427 [2024-05-15 02:01:34.816577] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.427 [2024-05-15 02:01:34.825794] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.427 [2024-05-15 02:01:34.826205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.427 [2024-05-15 02:01:34.826359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.427 [2024-05-15 02:01:34.826388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:11.427 [2024-05-15 02:01:34.826405] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:11.427 [2024-05-15 02:01:34.826647] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:11.427 [2024-05-15 02:01:34.826895] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.427 [2024-05-15 02:01:34.826921] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.427 [2024-05-15 02:01:34.826937] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.427 [2024-05-15 02:01:34.830578] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:11.427 [2024-05-15 02:01:34.839795] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.427 [2024-05-15 02:01:34.840213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.427 [2024-05-15 02:01:34.840386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.427 [2024-05-15 02:01:34.840414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:11.427 [2024-05-15 02:01:34.840432] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:11.427 [2024-05-15 02:01:34.840674] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:11.427 [2024-05-15 02:01:34.840922] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.427 [2024-05-15 02:01:34.840952] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.427 [2024-05-15 02:01:34.840970] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.427 [2024-05-15 02:01:34.844607] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.427 [2024-05-15 02:01:34.853817] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.427 [2024-05-15 02:01:34.854192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.427 [2024-05-15 02:01:34.854345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.427 [2024-05-15 02:01:34.854374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:11.427 [2024-05-15 02:01:34.854391] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:11.427 [2024-05-15 02:01:34.854633] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:11.427 [2024-05-15 02:01:34.854879] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.427 [2024-05-15 02:01:34.854904] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.428 [2024-05-15 02:01:34.854921] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.428 [2024-05-15 02:01:34.858560] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:11.428 [2024-05-15 02:01:34.867770] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.428 [2024-05-15 02:01:34.868149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.428 [2024-05-15 02:01:34.868327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.428 [2024-05-15 02:01:34.868354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:11.428 [2024-05-15 02:01:34.868370] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:11.428 [2024-05-15 02:01:34.868636] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:11.428 [2024-05-15 02:01:34.868884] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.428 [2024-05-15 02:01:34.868910] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.428 [2024-05-15 02:01:34.868926] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.428 [2024-05-15 02:01:34.872565] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.428 [2024-05-15 02:01:34.881772] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.428 [2024-05-15 02:01:34.882145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.428 [2024-05-15 02:01:34.882313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.428 [2024-05-15 02:01:34.882342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:11.428 [2024-05-15 02:01:34.882360] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:11.428 [2024-05-15 02:01:34.882603] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:11.428 [2024-05-15 02:01:34.882850] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.428 [2024-05-15 02:01:34.882876] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.428 [2024-05-15 02:01:34.882898] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.428 [2024-05-15 02:01:34.886537] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:11.428 [2024-05-15 02:01:34.895752] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.428 [2024-05-15 02:01:34.896131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.428 [2024-05-15 02:01:34.896301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.428 [2024-05-15 02:01:34.896330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:11.428 [2024-05-15 02:01:34.896347] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:11.428 [2024-05-15 02:01:34.896590] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:11.428 [2024-05-15 02:01:34.896837] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.428 [2024-05-15 02:01:34.896863] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.428 [2024-05-15 02:01:34.896879] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.428 [2024-05-15 02:01:34.900521] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.428 [2024-05-15 02:01:34.909738] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.428 [2024-05-15 02:01:34.910148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.428 [2024-05-15 02:01:34.910320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.428 [2024-05-15 02:01:34.910349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:11.428 [2024-05-15 02:01:34.910366] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:11.428 [2024-05-15 02:01:34.910609] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:11.428 [2024-05-15 02:01:34.910857] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.428 [2024-05-15 02:01:34.910882] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.428 [2024-05-15 02:01:34.910899] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.428 [2024-05-15 02:01:34.914537] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:11.428 [2024-05-15 02:01:34.923747] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.428 [2024-05-15 02:01:34.924150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.428 [2024-05-15 02:01:34.924276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.428 [2024-05-15 02:01:34.924305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:11.428 [2024-05-15 02:01:34.924323] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:11.428 [2024-05-15 02:01:34.924567] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:11.428 [2024-05-15 02:01:34.924813] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.428 [2024-05-15 02:01:34.924839] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.428 [2024-05-15 02:01:34.924856] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.428 [2024-05-15 02:01:34.928500] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.428 [2024-05-15 02:01:34.937714] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.428 [2024-05-15 02:01:34.938126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.428 [2024-05-15 02:01:34.938279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.428 [2024-05-15 02:01:34.938307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:11.428 [2024-05-15 02:01:34.938326] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:11.428 [2024-05-15 02:01:34.938568] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:11.428 [2024-05-15 02:01:34.938814] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.428 [2024-05-15 02:01:34.938840] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.428 [2024-05-15 02:01:34.938857] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.428 [2024-05-15 02:01:34.942499] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:11.428 [2024-05-15 02:01:34.951705] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.428 [2024-05-15 02:01:34.952112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.428 [2024-05-15 02:01:34.952275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.428 [2024-05-15 02:01:34.952304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:11.428 [2024-05-15 02:01:34.952322] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:11.428 [2024-05-15 02:01:34.952564] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:11.428 [2024-05-15 02:01:34.952812] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.428 [2024-05-15 02:01:34.952836] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.428 [2024-05-15 02:01:34.952852] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.428 [2024-05-15 02:01:34.956493] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.428 [2024-05-15 02:01:34.965717] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.428 [2024-05-15 02:01:34.966114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.428 [2024-05-15 02:01:34.966276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.428 [2024-05-15 02:01:34.966306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:11.428 [2024-05-15 02:01:34.966323] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:11.428 [2024-05-15 02:01:34.966566] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:11.428 [2024-05-15 02:01:34.966812] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.428 [2024-05-15 02:01:34.966837] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.428 [2024-05-15 02:01:34.966852] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.428 [2024-05-15 02:01:34.970489] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:11.428 [2024-05-15 02:01:34.979703] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.428 [2024-05-15 02:01:34.980108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.428 [2024-05-15 02:01:34.980275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.428 [2024-05-15 02:01:34.980304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:11.428 [2024-05-15 02:01:34.980322] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:11.428 [2024-05-15 02:01:34.980564] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:11.428 [2024-05-15 02:01:34.980812] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.428 [2024-05-15 02:01:34.980836] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.428 [2024-05-15 02:01:34.980852] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.428 [2024-05-15 02:01:34.984494] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.428 [2024-05-15 02:01:34.993702] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.428 [2024-05-15 02:01:34.994102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.428 [2024-05-15 02:01:34.994233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.428 [2024-05-15 02:01:34.994262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:11.429 [2024-05-15 02:01:34.994280] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:11.429 [2024-05-15 02:01:34.994522] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:11.429 [2024-05-15 02:01:34.994771] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.429 [2024-05-15 02:01:34.994796] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.429 [2024-05-15 02:01:34.994812] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.429 [2024-05-15 02:01:34.998448] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:11.429 [2024-05-15 02:01:35.007656] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.429 [2024-05-15 02:01:35.008065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.429 [2024-05-15 02:01:35.008169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.429 [2024-05-15 02:01:35.008198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:11.429 [2024-05-15 02:01:35.008224] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:11.429 [2024-05-15 02:01:35.008470] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:11.429 [2024-05-15 02:01:35.008718] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.429 [2024-05-15 02:01:35.008744] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.429 [2024-05-15 02:01:35.008761] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.429 [2024-05-15 02:01:35.012397] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.429 [2024-05-15 02:01:35.021603] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.429 [2024-05-15 02:01:35.022012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.429 [2024-05-15 02:01:35.022154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.429 [2024-05-15 02:01:35.022182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:11.429 [2024-05-15 02:01:35.022200] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:11.429 [2024-05-15 02:01:35.022451] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:11.429 [2024-05-15 02:01:35.022698] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.429 [2024-05-15 02:01:35.022724] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.429 [2024-05-15 02:01:35.022740] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.429 [2024-05-15 02:01:35.026375] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:11.429 [2024-05-15 02:01:35.035581] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.429 [2024-05-15 02:01:35.035990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.429 [2024-05-15 02:01:35.036152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.429 [2024-05-15 02:01:35.036180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:11.429 [2024-05-15 02:01:35.036197] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:11.429 [2024-05-15 02:01:35.036451] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:11.429 [2024-05-15 02:01:35.036698] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.429 [2024-05-15 02:01:35.036723] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.429 [2024-05-15 02:01:35.036740] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.429 [2024-05-15 02:01:35.040379] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.429 [2024-05-15 02:01:35.049594] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.429 [2024-05-15 02:01:35.050004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.429 [2024-05-15 02:01:35.050119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.429 [2024-05-15 02:01:35.050147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:11.429 [2024-05-15 02:01:35.050164] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:11.429 [2024-05-15 02:01:35.050418] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:11.429 [2024-05-15 02:01:35.050666] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.429 [2024-05-15 02:01:35.050691] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.429 [2024-05-15 02:01:35.050708] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.429 [2024-05-15 02:01:35.054343] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:11.429 [2024-05-15 02:01:35.063547] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.429 [2024-05-15 02:01:35.063948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.429 [2024-05-15 02:01:35.064083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.429 [2024-05-15 02:01:35.064116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:11.429 [2024-05-15 02:01:35.064134] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:11.429 [2024-05-15 02:01:35.064390] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:11.429 [2024-05-15 02:01:35.064636] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.429 [2024-05-15 02:01:35.064661] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.429 [2024-05-15 02:01:35.064677] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.429 [2024-05-15 02:01:35.068313] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.429 [2024-05-15 02:01:35.077551] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.429 [2024-05-15 02:01:35.077951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.429 [2024-05-15 02:01:35.078123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.429 [2024-05-15 02:01:35.078151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:11.429 [2024-05-15 02:01:35.078168] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:11.429 [2024-05-15 02:01:35.078424] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:11.429 [2024-05-15 02:01:35.078673] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.429 [2024-05-15 02:01:35.078698] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.429 [2024-05-15 02:01:35.078715] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.429 [2024-05-15 02:01:35.082352] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:11.429 [2024-05-15 02:01:35.091556] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.429 [2024-05-15 02:01:35.091956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.429 [2024-05-15 02:01:35.092117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.429 [2024-05-15 02:01:35.092145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:11.429 [2024-05-15 02:01:35.092163] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:11.429 [2024-05-15 02:01:35.092420] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:11.429 [2024-05-15 02:01:35.092669] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.429 [2024-05-15 02:01:35.092695] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.429 [2024-05-15 02:01:35.092712] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.429 [2024-05-15 02:01:35.096347] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.429 [2024-05-15 02:01:35.105555] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.429 [2024-05-15 02:01:35.105927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.429 [2024-05-15 02:01:35.106064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.429 [2024-05-15 02:01:35.106092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:11.429 [2024-05-15 02:01:35.106115] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:11.429 [2024-05-15 02:01:35.106370] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:11.429 [2024-05-15 02:01:35.106617] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.429 [2024-05-15 02:01:35.106643] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.429 [2024-05-15 02:01:35.106659] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.429 [2024-05-15 02:01:35.110298] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:11.429 [2024-05-15 02:01:35.119507] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.429 [2024-05-15 02:01:35.119906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.429 [2024-05-15 02:01:35.120069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.429 [2024-05-15 02:01:35.120097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:11.429 [2024-05-15 02:01:35.120115] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:11.429 [2024-05-15 02:01:35.120373] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:11.429 [2024-05-15 02:01:35.120622] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.429 [2024-05-15 02:01:35.120647] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.429 [2024-05-15 02:01:35.120664] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.430 [2024-05-15 02:01:35.124301] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.430 [2024-05-15 02:01:35.133510] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.430 [2024-05-15 02:01:35.133909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.430 [2024-05-15 02:01:35.134069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.430 [2024-05-15 02:01:35.134097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:11.430 [2024-05-15 02:01:35.134115] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:11.430 [2024-05-15 02:01:35.134370] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:11.430 [2024-05-15 02:01:35.134616] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.430 [2024-05-15 02:01:35.134641] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.430 [2024-05-15 02:01:35.134658] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.430 [2024-05-15 02:01:35.138390] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:11.430 [2024-05-15 02:01:35.147601] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.430 [2024-05-15 02:01:35.148010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.430 [2024-05-15 02:01:35.148231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.430 [2024-05-15 02:01:35.148260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:11.430 [2024-05-15 02:01:35.148278] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:11.430 [2024-05-15 02:01:35.148527] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:11.430 [2024-05-15 02:01:35.148775] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.430 [2024-05-15 02:01:35.148801] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.430 [2024-05-15 02:01:35.148817] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.430 [2024-05-15 02:01:35.152452] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.430 [2024-05-15 02:01:35.161662] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.430 [2024-05-15 02:01:35.162074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.430 [2024-05-15 02:01:35.162239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.430 [2024-05-15 02:01:35.162265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:11.430 [2024-05-15 02:01:35.162282] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:11.430 [2024-05-15 02:01:35.162535] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:11.430 [2024-05-15 02:01:35.162781] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.430 [2024-05-15 02:01:35.162806] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.430 [2024-05-15 02:01:35.162822] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.430 [2024-05-15 02:01:35.166468] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:11.430 [2024-05-15 02:01:35.175696] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.430 [2024-05-15 02:01:35.176107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.430 [2024-05-15 02:01:35.176230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.430 [2024-05-15 02:01:35.176258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:11.430 [2024-05-15 02:01:35.176274] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:11.430 [2024-05-15 02:01:35.176532] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:11.430 [2024-05-15 02:01:35.176779] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.430 [2024-05-15 02:01:35.176804] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.430 [2024-05-15 02:01:35.176820] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.430 [2024-05-15 02:01:35.180457] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.430 [2024-05-15 02:01:35.189656] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.430 [2024-05-15 02:01:35.190051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.430 [2024-05-15 02:01:35.190190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.430 [2024-05-15 02:01:35.190225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:11.430 [2024-05-15 02:01:35.190246] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:11.430 [2024-05-15 02:01:35.190488] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:11.430 [2024-05-15 02:01:35.190740] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.430 [2024-05-15 02:01:35.190765] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.430 [2024-05-15 02:01:35.190782] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.430 [2024-05-15 02:01:35.194413] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:11.430 [2024-05-15 02:01:35.203618] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.430 [2024-05-15 02:01:35.204016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.430 [2024-05-15 02:01:35.204126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.430 [2024-05-15 02:01:35.204153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:11.430 [2024-05-15 02:01:35.204171] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:11.430 [2024-05-15 02:01:35.204421] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:11.430 [2024-05-15 02:01:35.204669] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.430 [2024-05-15 02:01:35.204693] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.430 [2024-05-15 02:01:35.204709] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.430 [2024-05-15 02:01:35.208346] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.430 [2024-05-15 02:01:35.217557] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.430 [2024-05-15 02:01:35.217978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.430 [2024-05-15 02:01:35.218110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.430 [2024-05-15 02:01:35.218137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:11.430 [2024-05-15 02:01:35.218153] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:11.430 [2024-05-15 02:01:35.218413] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:11.430 [2024-05-15 02:01:35.218660] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.430 [2024-05-15 02:01:35.218686] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.430 [2024-05-15 02:01:35.218702] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.430 [2024-05-15 02:01:35.222338] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:11.430 [2024-05-15 02:01:35.231548] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.430 [2024-05-15 02:01:35.231920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.430 [2024-05-15 02:01:35.232055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.430 [2024-05-15 02:01:35.232081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:11.430 [2024-05-15 02:01:35.232099] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:11.430 [2024-05-15 02:01:35.232352] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:11.430 [2024-05-15 02:01:35.232599] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.430 [2024-05-15 02:01:35.232624] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.430 [2024-05-15 02:01:35.232646] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.430 [2024-05-15 02:01:35.236282] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.430 [2024-05-15 02:01:35.245521] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.430 [2024-05-15 02:01:35.245921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.430 [2024-05-15 02:01:35.246101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.430 [2024-05-15 02:01:35.246128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:11.430 [2024-05-15 02:01:35.246144] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:11.430 [2024-05-15 02:01:35.246411] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:11.430 [2024-05-15 02:01:35.246659] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.430 [2024-05-15 02:01:35.246684] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.430 [2024-05-15 02:01:35.246700] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.430 [2024-05-15 02:01:35.250351] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:11.430 [2024-05-15 02:01:35.259567] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.430 [2024-05-15 02:01:35.259974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.430 [2024-05-15 02:01:35.260145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.430 [2024-05-15 02:01:35.260174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:11.430 [2024-05-15 02:01:35.260192] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:11.431 [2024-05-15 02:01:35.260447] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:11.431 [2024-05-15 02:01:35.260694] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.431 [2024-05-15 02:01:35.260718] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.431 [2024-05-15 02:01:35.260734] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.431 [2024-05-15 02:01:35.264375] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.431 [2024-05-15 02:01:35.273589] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.431 [2024-05-15 02:01:35.274057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.431 [2024-05-15 02:01:35.274162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.431 [2024-05-15 02:01:35.274186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:11.431 [2024-05-15 02:01:35.274203] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:11.431 [2024-05-15 02:01:35.274463] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:11.431 [2024-05-15 02:01:35.274711] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.431 [2024-05-15 02:01:35.274735] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.431 [2024-05-15 02:01:35.274752] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.431 [2024-05-15 02:01:35.278397] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.431 [2024-05-15 02:01:35.287612] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.431 [2024-05-15 02:01:35.288011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.431 [2024-05-15 02:01:35.288149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.431 [2024-05-15 02:01:35.288177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:11.431 [2024-05-15 02:01:35.288196] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:11.431 [2024-05-15 02:01:35.288447] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:11.431 [2024-05-15 02:01:35.288695] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.431 [2024-05-15 02:01:35.288719] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.431 [2024-05-15 02:01:35.288736] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.431 [2024-05-15 02:01:35.292369] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.431 [2024-05-15 02:01:35.301573] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.431 [2024-05-15 02:01:35.302025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.431 [2024-05-15 02:01:35.302155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.431 [2024-05-15 02:01:35.302182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:11.431 [2024-05-15 02:01:35.302200] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:11.431 [2024-05-15 02:01:35.302452] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:11.431 [2024-05-15 02:01:35.302701] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.431 [2024-05-15 02:01:35.302727] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.431 [2024-05-15 02:01:35.302743] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.431 [2024-05-15 02:01:35.306386] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.431 [2024-05-15 02:01:35.315628] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.431 [2024-05-15 02:01:35.316028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.431 [2024-05-15 02:01:35.316200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.431 [2024-05-15 02:01:35.316234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:11.431 [2024-05-15 02:01:35.316262] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:11.431 [2024-05-15 02:01:35.316505] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:11.431 [2024-05-15 02:01:35.316753] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.431 [2024-05-15 02:01:35.316779] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.431 [2024-05-15 02:01:35.316795] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.431 [2024-05-15 02:01:35.320434] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.431 [2024-05-15 02:01:35.329663] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.431 [2024-05-15 02:01:35.330089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.431 [2024-05-15 02:01:35.330206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.431 [2024-05-15 02:01:35.330241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:11.431 [2024-05-15 02:01:35.330263] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:11.431 [2024-05-15 02:01:35.330516] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:11.431 [2024-05-15 02:01:35.330763] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.431 [2024-05-15 02:01:35.330788] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.431 [2024-05-15 02:01:35.330804] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.431 [2024-05-15 02:01:35.334444] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.431 [2024-05-15 02:01:35.343669] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.431 [2024-05-15 02:01:35.344167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.431 [2024-05-15 02:01:35.344314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.431 [2024-05-15 02:01:35.344355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:11.431 [2024-05-15 02:01:35.344371] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:11.431 [2024-05-15 02:01:35.344615] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:11.431 [2024-05-15 02:01:35.344875] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.431 [2024-05-15 02:01:35.344900] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.431 [2024-05-15 02:01:35.344917] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.431 [2024-05-15 02:01:35.348598] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.690 [2024-05-15 02:01:35.357660] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.690 [2024-05-15 02:01:35.358069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.690 [2024-05-15 02:01:35.358204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.690 [2024-05-15 02:01:35.358245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:11.690 [2024-05-15 02:01:35.358265] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:11.690 [2024-05-15 02:01:35.358508] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:11.690 [2024-05-15 02:01:35.358756] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.690 [2024-05-15 02:01:35.358781] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.690 [2024-05-15 02:01:35.358798] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.690 [2024-05-15 02:01:35.362436] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.690 [2024-05-15 02:01:35.371679] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.690 [2024-05-15 02:01:35.372177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.690 [2024-05-15 02:01:35.372304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.690 [2024-05-15 02:01:35.372332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:11.690 [2024-05-15 02:01:35.372350] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:11.690 [2024-05-15 02:01:35.372592] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:11.690 [2024-05-15 02:01:35.372839] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.690 [2024-05-15 02:01:35.372864] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.690 [2024-05-15 02:01:35.372880] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.690 [2024-05-15 02:01:35.376522] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.690 [2024-05-15 02:01:35.385727] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.690 [2024-05-15 02:01:35.386174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.690 [2024-05-15 02:01:35.386345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.690 [2024-05-15 02:01:35.386373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:11.690 [2024-05-15 02:01:35.386391] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:11.690 [2024-05-15 02:01:35.386634] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:11.690 [2024-05-15 02:01:35.386882] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.690 [2024-05-15 02:01:35.386907] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.690 [2024-05-15 02:01:35.386924] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.691 [2024-05-15 02:01:35.390563] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.691 [2024-05-15 02:01:35.399773] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.691 [2024-05-15 02:01:35.400147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.691 [2024-05-15 02:01:35.400282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.691 [2024-05-15 02:01:35.400311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:11.691 [2024-05-15 02:01:35.400328] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:11.691 [2024-05-15 02:01:35.400570] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:11.691 [2024-05-15 02:01:35.400817] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.691 [2024-05-15 02:01:35.400842] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.691 [2024-05-15 02:01:35.400857] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.691 [2024-05-15 02:01:35.404496] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.691 [2024-05-15 02:01:35.413718] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.691 [2024-05-15 02:01:35.414116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.691 [2024-05-15 02:01:35.414232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.691 [2024-05-15 02:01:35.414260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:11.691 [2024-05-15 02:01:35.414278] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:11.691 [2024-05-15 02:01:35.414520] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:11.691 [2024-05-15 02:01:35.414769] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.691 [2024-05-15 02:01:35.414794] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.691 [2024-05-15 02:01:35.414811] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.691 [2024-05-15 02:01:35.418455] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.691 [2024-05-15 02:01:35.427701] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.691 [2024-05-15 02:01:35.428106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.691 [2024-05-15 02:01:35.428271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.691 [2024-05-15 02:01:35.428301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:11.691 [2024-05-15 02:01:35.428319] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:11.691 [2024-05-15 02:01:35.428561] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:11.691 [2024-05-15 02:01:35.428809] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.691 [2024-05-15 02:01:35.428833] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.691 [2024-05-15 02:01:35.428850] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.691 [2024-05-15 02:01:35.432491] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.691 [2024-05-15 02:01:35.441736] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.691 [2024-05-15 02:01:35.442140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.691 [2024-05-15 02:01:35.442296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.691 [2024-05-15 02:01:35.442326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:11.691 [2024-05-15 02:01:35.442344] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:11.691 [2024-05-15 02:01:35.442587] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:11.691 [2024-05-15 02:01:35.442835] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.691 [2024-05-15 02:01:35.442864] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.691 [2024-05-15 02:01:35.442881] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.691 [2024-05-15 02:01:35.446521] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.691 [2024-05-15 02:01:35.455758] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.691 [2024-05-15 02:01:35.456165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.691 [2024-05-15 02:01:35.456296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.691 [2024-05-15 02:01:35.456326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:11.691 [2024-05-15 02:01:35.456354] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:11.691 [2024-05-15 02:01:35.456597] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:11.691 [2024-05-15 02:01:35.456845] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.691 [2024-05-15 02:01:35.456869] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.691 [2024-05-15 02:01:35.456885] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.691 [2024-05-15 02:01:35.460529] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.691 [2024-05-15 02:01:35.469755] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.691 [2024-05-15 02:01:35.470163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.691 [2024-05-15 02:01:35.470306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.691 [2024-05-15 02:01:35.470336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:11.691 [2024-05-15 02:01:35.470354] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:11.691 [2024-05-15 02:01:35.470596] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:11.691 [2024-05-15 02:01:35.470843] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.691 [2024-05-15 02:01:35.470867] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.691 [2024-05-15 02:01:35.470883] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.691 [2024-05-15 02:01:35.474529] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.691 [2024-05-15 02:01:35.483797] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.691 [2024-05-15 02:01:35.484210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.691 [2024-05-15 02:01:35.484335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.691 [2024-05-15 02:01:35.484365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:11.691 [2024-05-15 02:01:35.484383] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:11.691 [2024-05-15 02:01:35.484626] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:11.691 [2024-05-15 02:01:35.484874] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.691 [2024-05-15 02:01:35.484898] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.691 [2024-05-15 02:01:35.484914] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.691 [2024-05-15 02:01:35.488560] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.691 [2024-05-15 02:01:35.497782] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.691 [2024-05-15 02:01:35.498182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.691 [2024-05-15 02:01:35.498312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.691 [2024-05-15 02:01:35.498340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:11.691 [2024-05-15 02:01:35.498358] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:11.691 [2024-05-15 02:01:35.498607] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:11.691 [2024-05-15 02:01:35.498854] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.691 [2024-05-15 02:01:35.498878] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.691 [2024-05-15 02:01:35.498894] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.691 [2024-05-15 02:01:35.502542] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.691 [2024-05-15 02:01:35.511778] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.691 [2024-05-15 02:01:35.512191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.691 [2024-05-15 02:01:35.512339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.691 [2024-05-15 02:01:35.512368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:11.691 [2024-05-15 02:01:35.512387] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:11.691 [2024-05-15 02:01:35.512629] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:11.691 [2024-05-15 02:01:35.512877] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.691 [2024-05-15 02:01:35.512901] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.691 [2024-05-15 02:01:35.512917] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.691 [2024-05-15 02:01:35.516554] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.691 [2024-05-15 02:01:35.525785] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.691 [2024-05-15 02:01:35.526165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.691 [2024-05-15 02:01:35.526316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.691 [2024-05-15 02:01:35.526345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:11.691 [2024-05-15 02:01:35.526362] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:11.692 [2024-05-15 02:01:35.526605] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:11.692 [2024-05-15 02:01:35.526852] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.692 [2024-05-15 02:01:35.526877] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.692 [2024-05-15 02:01:35.526893] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.692 [2024-05-15 02:01:35.530530] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.692 [2024-05-15 02:01:35.539753] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.692 [2024-05-15 02:01:35.540163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.692 [2024-05-15 02:01:35.540290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.692 [2024-05-15 02:01:35.540319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:11.692 [2024-05-15 02:01:35.540337] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:11.692 [2024-05-15 02:01:35.540579] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:11.692 [2024-05-15 02:01:35.540832] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.692 [2024-05-15 02:01:35.540857] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.692 [2024-05-15 02:01:35.540873] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.692 [2024-05-15 02:01:35.544507] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.692 [2024-05-15 02:01:35.553718] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.692 [2024-05-15 02:01:35.554124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.692 [2024-05-15 02:01:35.554268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.692 [2024-05-15 02:01:35.554298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:11.692 [2024-05-15 02:01:35.554316] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:11.692 [2024-05-15 02:01:35.554559] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:11.692 [2024-05-15 02:01:35.554806] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.692 [2024-05-15 02:01:35.554830] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.692 [2024-05-15 02:01:35.554846] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.692 [2024-05-15 02:01:35.558482] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.692 [2024-05-15 02:01:35.567689] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.692 [2024-05-15 02:01:35.568075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.692 [2024-05-15 02:01:35.568206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.692 [2024-05-15 02:01:35.568244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:11.692 [2024-05-15 02:01:35.568262] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:11.692 [2024-05-15 02:01:35.568505] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:11.692 [2024-05-15 02:01:35.568751] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.692 [2024-05-15 02:01:35.568775] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.692 [2024-05-15 02:01:35.568791] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.692 [2024-05-15 02:01:35.572422] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.692 [2024-05-15 02:01:35.581631] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.692 [2024-05-15 02:01:35.582041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.692 [2024-05-15 02:01:35.582221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.692 [2024-05-15 02:01:35.582251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:11.692 [2024-05-15 02:01:35.582269] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:11.692 [2024-05-15 02:01:35.582511] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:11.692 [2024-05-15 02:01:35.582758] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.692 [2024-05-15 02:01:35.582788] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.692 [2024-05-15 02:01:35.582805] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.692 [2024-05-15 02:01:35.586439] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.692 [2024-05-15 02:01:35.595638] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.692 [2024-05-15 02:01:35.596037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.692 [2024-05-15 02:01:35.596174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.692 [2024-05-15 02:01:35.596202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:11.692 [2024-05-15 02:01:35.596228] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:11.692 [2024-05-15 02:01:35.596473] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:11.692 [2024-05-15 02:01:35.596721] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.692 [2024-05-15 02:01:35.596745] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.692 [2024-05-15 02:01:35.596761] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.692 [2024-05-15 02:01:35.600392] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.692 [2024-05-15 02:01:35.609597] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.692 [2024-05-15 02:01:35.609985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.692 [2024-05-15 02:01:35.610125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.692 [2024-05-15 02:01:35.610152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:11.692 [2024-05-15 02:01:35.610170] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:11.692 [2024-05-15 02:01:35.610421] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:11.692 [2024-05-15 02:01:35.610668] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.692 [2024-05-15 02:01:35.610692] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.692 [2024-05-15 02:01:35.610709] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.692 [2024-05-15 02:01:35.614340] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.952 [2024-05-15 02:01:35.623614] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.952 [2024-05-15 02:01:35.624016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.952 [2024-05-15 02:01:35.624164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.952 [2024-05-15 02:01:35.624192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:11.952 [2024-05-15 02:01:35.624211] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:11.952 [2024-05-15 02:01:35.624464] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:11.952 [2024-05-15 02:01:35.624711] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.952 [2024-05-15 02:01:35.624735] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.952 [2024-05-15 02:01:35.624757] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.952 [2024-05-15 02:01:35.628417] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.952 [2024-05-15 02:01:35.637621] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.952 [2024-05-15 02:01:35.638029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.952 [2024-05-15 02:01:35.638143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.952 [2024-05-15 02:01:35.638173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:11.952 [2024-05-15 02:01:35.638191] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:11.952 [2024-05-15 02:01:35.638443] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:11.952 [2024-05-15 02:01:35.638690] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.952 [2024-05-15 02:01:35.638714] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.952 [2024-05-15 02:01:35.638731] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.952 [2024-05-15 02:01:35.642372] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.952 [2024-05-15 02:01:35.651583] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.952 [2024-05-15 02:01:35.651984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.952 [2024-05-15 02:01:35.652144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.952 [2024-05-15 02:01:35.652173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:11.952 [2024-05-15 02:01:35.652191] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:11.952 [2024-05-15 02:01:35.652443] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:11.952 [2024-05-15 02:01:35.652690] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.952 [2024-05-15 02:01:35.652714] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.952 [2024-05-15 02:01:35.652730] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.952 [2024-05-15 02:01:35.656361] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.952 [2024-05-15 02:01:35.665564] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.952 [2024-05-15 02:01:35.665970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.952 [2024-05-15 02:01:35.666107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.952 [2024-05-15 02:01:35.666136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:11.952 [2024-05-15 02:01:35.666154] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:11.952 [2024-05-15 02:01:35.666407] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:11.952 [2024-05-15 02:01:35.666654] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.952 [2024-05-15 02:01:35.666678] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.952 [2024-05-15 02:01:35.666695] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.952 [2024-05-15 02:01:35.670331] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.952 [2024-05-15 02:01:35.679533] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.952 [2024-05-15 02:01:35.679931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.952 [2024-05-15 02:01:35.680081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.952 [2024-05-15 02:01:35.680109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:11.952 [2024-05-15 02:01:35.680127] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:11.952 [2024-05-15 02:01:35.680382] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:11.952 [2024-05-15 02:01:35.680629] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.952 [2024-05-15 02:01:35.680653] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.952 [2024-05-15 02:01:35.680670] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.952 [2024-05-15 02:01:35.684302] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.952 [2024-05-15 02:01:35.693504] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.952 [2024-05-15 02:01:35.693903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.952 [2024-05-15 02:01:35.694066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.952 [2024-05-15 02:01:35.694095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:11.952 [2024-05-15 02:01:35.694113] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:11.952 [2024-05-15 02:01:35.694366] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:11.952 [2024-05-15 02:01:35.694613] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.952 [2024-05-15 02:01:35.694638] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.952 [2024-05-15 02:01:35.694654] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.952 [2024-05-15 02:01:35.698288] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.952 [2024-05-15 02:01:35.707494] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.952 [2024-05-15 02:01:35.707902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.952 [2024-05-15 02:01:35.708141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.952 [2024-05-15 02:01:35.708169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:11.952 [2024-05-15 02:01:35.708187] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:11.952 [2024-05-15 02:01:35.708439] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:11.952 [2024-05-15 02:01:35.708686] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.952 [2024-05-15 02:01:35.708710] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.952 [2024-05-15 02:01:35.708726] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.952 [2024-05-15 02:01:35.712358] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.952 [2024-05-15 02:01:35.721586] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.952 [2024-05-15 02:01:35.721976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.952 [2024-05-15 02:01:35.722142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.952 [2024-05-15 02:01:35.722170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:11.952 [2024-05-15 02:01:35.722188] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:11.953 [2024-05-15 02:01:35.722442] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:11.953 [2024-05-15 02:01:35.722689] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.953 [2024-05-15 02:01:35.722713] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.953 [2024-05-15 02:01:35.722730] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.953 [2024-05-15 02:01:35.726358] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.953 [2024-05-15 02:01:35.735760] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.953 [2024-05-15 02:01:35.736175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.953 [2024-05-15 02:01:35.736337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.953 [2024-05-15 02:01:35.736367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:11.953 [2024-05-15 02:01:35.736385] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:11.953 [2024-05-15 02:01:35.736627] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:11.953 [2024-05-15 02:01:35.736873] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.953 [2024-05-15 02:01:35.736897] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.953 [2024-05-15 02:01:35.736914] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.953 [2024-05-15 02:01:35.740556] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.953 [2024-05-15 02:01:35.749766] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.953 [2024-05-15 02:01:35.750144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.953 [2024-05-15 02:01:35.750282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.953 [2024-05-15 02:01:35.750312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:11.953 [2024-05-15 02:01:35.750330] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:11.953 [2024-05-15 02:01:35.750572] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:11.953 [2024-05-15 02:01:35.750819] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.953 [2024-05-15 02:01:35.750843] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.953 [2024-05-15 02:01:35.750859] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.953 [2024-05-15 02:01:35.754494] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.953 [2024-05-15 02:01:35.763705] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.953 [2024-05-15 02:01:35.764102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.953 [2024-05-15 02:01:35.764262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.953 [2024-05-15 02:01:35.764293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:11.953 [2024-05-15 02:01:35.764311] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:11.953 [2024-05-15 02:01:35.764554] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:11.953 [2024-05-15 02:01:35.764801] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.953 [2024-05-15 02:01:35.764826] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.953 [2024-05-15 02:01:35.764842] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.953 [2024-05-15 02:01:35.768478] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.953 [2024-05-15 02:01:35.777694] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.953 [2024-05-15 02:01:35.778197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.953 [2024-05-15 02:01:35.778378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.953 [2024-05-15 02:01:35.778408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:11.953 [2024-05-15 02:01:35.778426] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:11.953 [2024-05-15 02:01:35.778668] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:11.953 [2024-05-15 02:01:35.778915] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.953 [2024-05-15 02:01:35.778939] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.953 [2024-05-15 02:01:35.778956] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.953 [2024-05-15 02:01:35.782592] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.953 [2024-05-15 02:01:35.791794] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.953 [2024-05-15 02:01:35.792260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.953 [2024-05-15 02:01:35.792423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.953 [2024-05-15 02:01:35.792452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:11.953 [2024-05-15 02:01:35.792470] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:11.953 [2024-05-15 02:01:35.792711] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:11.953 [2024-05-15 02:01:35.792958] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.953 [2024-05-15 02:01:35.792982] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.953 [2024-05-15 02:01:35.792999] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.953 [2024-05-15 02:01:35.796634] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.953 [2024-05-15 02:01:35.805837] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.953 [2024-05-15 02:01:35.806238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.953 [2024-05-15 02:01:35.806377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.953 [2024-05-15 02:01:35.806411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:11.953 [2024-05-15 02:01:35.806429] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:11.953 [2024-05-15 02:01:35.806672] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:11.953 [2024-05-15 02:01:35.806919] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.953 [2024-05-15 02:01:35.806943] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.953 [2024-05-15 02:01:35.806960] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.953 [2024-05-15 02:01:35.810593] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.953 [2024-05-15 02:01:35.819793] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:11.953 [2024-05-15 02:01:35.820194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.953 [2024-05-15 02:01:35.820361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:11.953 [2024-05-15 02:01:35.820390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:11.953 [2024-05-15 02:01:35.820408] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:11.953 [2024-05-15 02:01:35.820651] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:11.953 [2024-05-15 02:01:35.820898] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:11.953 [2024-05-15 02:01:35.820922] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:11.953 [2024-05-15 02:01:35.820938] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:11.953 [2024-05-15 02:01:35.824577] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:11.953 [2024-05-15 02:01:35.833774] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.953 [2024-05-15 02:01:35.834177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.953 [2024-05-15 02:01:35.834328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.953 [2024-05-15 02:01:35.834357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:11.953 [2024-05-15 02:01:35.834375] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:11.953 [2024-05-15 02:01:35.834617] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:11.953 [2024-05-15 02:01:35.834865] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.953 [2024-05-15 02:01:35.834889] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.953 [2024-05-15 02:01:35.834906] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.953 [2024-05-15 02:01:35.838539] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:11.953 [2024-05-15 02:01:35.847744] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.953 [2024-05-15 02:01:35.848165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.953 [2024-05-15 02:01:35.848327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.953 [2024-05-15 02:01:35.848356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:11.953 [2024-05-15 02:01:35.848380] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:11.953 [2024-05-15 02:01:35.848623] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:11.953 [2024-05-15 02:01:35.848870] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.953 [2024-05-15 02:01:35.848895] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.953 [2024-05-15 02:01:35.848911] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.953 [2024-05-15 02:01:35.852543] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:11.953 [2024-05-15 02:01:35.861748] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.954 [2024-05-15 02:01:35.862152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.954 [2024-05-15 02:01:35.862315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.954 [2024-05-15 02:01:35.862345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:11.954 [2024-05-15 02:01:35.862364] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:11.954 [2024-05-15 02:01:35.862607] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:11.954 [2024-05-15 02:01:35.862854] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.954 [2024-05-15 02:01:35.862879] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.954 [2024-05-15 02:01:35.862895] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.954 [2024-05-15 02:01:35.866532] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:11.954 [2024-05-15 02:01:35.875736] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:11.954 [2024-05-15 02:01:35.876120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.954 [2024-05-15 02:01:35.876255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.954 [2024-05-15 02:01:35.876284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:11.954 [2024-05-15 02:01:35.876302] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:11.954 [2024-05-15 02:01:35.876545] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:11.954 [2024-05-15 02:01:35.876792] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:11.954 [2024-05-15 02:01:35.876817] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:11.954 [2024-05-15 02:01:35.876833] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:11.954 [2024-05-15 02:01:35.880498] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:12.212 [2024-05-15 02:01:35.889764] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.212 [2024-05-15 02:01:35.890184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.212 [2024-05-15 02:01:35.890333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.212 [2024-05-15 02:01:35.890363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.212 [2024-05-15 02:01:35.890381] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.213 [2024-05-15 02:01:35.890629] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.213 [2024-05-15 02:01:35.890877] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.213 [2024-05-15 02:01:35.890902] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.213 [2024-05-15 02:01:35.890918] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.213 [2024-05-15 02:01:35.894549] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:12.213 [2024-05-15 02:01:35.903760] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.213 [2024-05-15 02:01:35.904138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.213 [2024-05-15 02:01:35.904307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.213 [2024-05-15 02:01:35.904338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.213 [2024-05-15 02:01:35.904356] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.213 [2024-05-15 02:01:35.904599] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.213 [2024-05-15 02:01:35.904846] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.213 [2024-05-15 02:01:35.904871] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.213 [2024-05-15 02:01:35.904887] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.213 [2024-05-15 02:01:35.908523] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:12.213 [2024-05-15 02:01:35.917730] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.213 [2024-05-15 02:01:35.918136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.213 [2024-05-15 02:01:35.918305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.213 [2024-05-15 02:01:35.918334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.213 [2024-05-15 02:01:35.918352] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.213 [2024-05-15 02:01:35.918595] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.213 [2024-05-15 02:01:35.918842] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.213 [2024-05-15 02:01:35.918866] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.213 [2024-05-15 02:01:35.918882] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.213 [2024-05-15 02:01:35.922517] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:12.213 [2024-05-15 02:01:35.931724] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.213 [2024-05-15 02:01:35.932123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.213 [2024-05-15 02:01:35.932240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.213 [2024-05-15 02:01:35.932269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.213 [2024-05-15 02:01:35.932287] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.213 [2024-05-15 02:01:35.932529] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.213 [2024-05-15 02:01:35.932782] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.213 [2024-05-15 02:01:35.932807] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.213 [2024-05-15 02:01:35.932823] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.213 [2024-05-15 02:01:35.936460] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:12.213 [2024-05-15 02:01:35.945680] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.213 [2024-05-15 02:01:35.946169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.213 [2024-05-15 02:01:35.946352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.213 [2024-05-15 02:01:35.946381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.213 [2024-05-15 02:01:35.946399] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.213 [2024-05-15 02:01:35.946641] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.213 [2024-05-15 02:01:35.946888] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.213 [2024-05-15 02:01:35.946912] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.213 [2024-05-15 02:01:35.946928] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.213 [2024-05-15 02:01:35.950560] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:12.213 [2024-05-15 02:01:35.959767] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.213 [2024-05-15 02:01:35.960265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.213 [2024-05-15 02:01:35.960410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.213 [2024-05-15 02:01:35.960439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.213 [2024-05-15 02:01:35.960457] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.213 [2024-05-15 02:01:35.960699] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.213 [2024-05-15 02:01:35.960946] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.213 [2024-05-15 02:01:35.960971] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.213 [2024-05-15 02:01:35.960987] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.213 [2024-05-15 02:01:35.964622] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:12.213 [2024-05-15 02:01:35.973829] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.213 [2024-05-15 02:01:35.974305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.213 [2024-05-15 02:01:35.974437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.213 [2024-05-15 02:01:35.974466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.213 [2024-05-15 02:01:35.974484] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.213 [2024-05-15 02:01:35.974726] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.213 [2024-05-15 02:01:35.974973] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.213 [2024-05-15 02:01:35.975002] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.213 [2024-05-15 02:01:35.975019] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.213 [2024-05-15 02:01:35.978654] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:12.213 [2024-05-15 02:01:35.987868] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.213 [2024-05-15 02:01:35.988244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.213 [2024-05-15 02:01:35.988354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.213 [2024-05-15 02:01:35.988383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.213 [2024-05-15 02:01:35.988401] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.213 [2024-05-15 02:01:35.988644] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.213 [2024-05-15 02:01:35.988891] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.213 [2024-05-15 02:01:35.988916] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.213 [2024-05-15 02:01:35.988932] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.213 [2024-05-15 02:01:35.992570] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:12.213 [2024-05-15 02:01:36.001776] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.213 [2024-05-15 02:01:36.002177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.213 [2024-05-15 02:01:36.002329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.213 [2024-05-15 02:01:36.002359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.213 [2024-05-15 02:01:36.002377] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.213 [2024-05-15 02:01:36.002619] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.213 [2024-05-15 02:01:36.002866] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.213 [2024-05-15 02:01:36.002891] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.213 [2024-05-15 02:01:36.002906] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.213 [2024-05-15 02:01:36.006538] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:12.213 [2024-05-15 02:01:36.015741] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.213 [2024-05-15 02:01:36.016146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.213 [2024-05-15 02:01:36.016311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.213 [2024-05-15 02:01:36.016341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.213 [2024-05-15 02:01:36.016360] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.213 [2024-05-15 02:01:36.016602] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.213 [2024-05-15 02:01:36.016849] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.213 [2024-05-15 02:01:36.016874] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.213 [2024-05-15 02:01:36.016894] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.213 [2024-05-15 02:01:36.020532] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:12.214 [2024-05-15 02:01:36.029740] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.214 [2024-05-15 02:01:36.030139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.214 [2024-05-15 02:01:36.030252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.214 [2024-05-15 02:01:36.030282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.214 [2024-05-15 02:01:36.030300] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.214 [2024-05-15 02:01:36.030542] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.214 [2024-05-15 02:01:36.030789] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.214 [2024-05-15 02:01:36.030814] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.214 [2024-05-15 02:01:36.030830] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.214 [2024-05-15 02:01:36.034467] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:12.214 [2024-05-15 02:01:36.043672] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.214 [2024-05-15 02:01:36.044082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.214 [2024-05-15 02:01:36.044230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.214 [2024-05-15 02:01:36.044261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.214 [2024-05-15 02:01:36.044279] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.214 [2024-05-15 02:01:36.044521] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.214 [2024-05-15 02:01:36.044768] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.214 [2024-05-15 02:01:36.044793] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.214 [2024-05-15 02:01:36.044809] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.214 [2024-05-15 02:01:36.048442] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:12.214 [2024-05-15 02:01:36.057652] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.214 [2024-05-15 02:01:36.058050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.214 [2024-05-15 02:01:36.058222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.214 [2024-05-15 02:01:36.058252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.214 [2024-05-15 02:01:36.058270] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.214 [2024-05-15 02:01:36.058513] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.214 [2024-05-15 02:01:36.058760] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.214 [2024-05-15 02:01:36.058784] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.214 [2024-05-15 02:01:36.058800] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.214 [2024-05-15 02:01:36.062431] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:12.214 [2024-05-15 02:01:36.071641] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.214 [2024-05-15 02:01:36.072025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.214 [2024-05-15 02:01:36.072190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.214 [2024-05-15 02:01:36.072228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.214 [2024-05-15 02:01:36.072248] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.214 [2024-05-15 02:01:36.072491] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.214 [2024-05-15 02:01:36.072738] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.214 [2024-05-15 02:01:36.072762] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.214 [2024-05-15 02:01:36.072778] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.214 [2024-05-15 02:01:36.076410] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:12.214 [2024-05-15 02:01:36.085613] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.214 [2024-05-15 02:01:36.086077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.214 [2024-05-15 02:01:36.086243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.214 [2024-05-15 02:01:36.086272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.214 [2024-05-15 02:01:36.086290] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.214 [2024-05-15 02:01:36.086533] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.214 [2024-05-15 02:01:36.086779] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.214 [2024-05-15 02:01:36.086804] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.214 [2024-05-15 02:01:36.086820] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.214 [2024-05-15 02:01:36.090457] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:12.214 [2024-05-15 02:01:36.099666] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.214 [2024-05-15 02:01:36.100061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.214 [2024-05-15 02:01:36.100200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.214 [2024-05-15 02:01:36.100237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.214 [2024-05-15 02:01:36.100256] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.214 [2024-05-15 02:01:36.100498] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.214 [2024-05-15 02:01:36.100745] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.214 [2024-05-15 02:01:36.100770] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.214 [2024-05-15 02:01:36.100786] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.214 [2024-05-15 02:01:36.104417] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:12.214 [2024-05-15 02:01:36.113620] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.214 [2024-05-15 02:01:36.114028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.214 [2024-05-15 02:01:36.114193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.214 [2024-05-15 02:01:36.114231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.214 [2024-05-15 02:01:36.114252] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.214 [2024-05-15 02:01:36.114495] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.214 [2024-05-15 02:01:36.114741] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.214 [2024-05-15 02:01:36.114765] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.214 [2024-05-15 02:01:36.114781] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.214 [2024-05-15 02:01:36.118417] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:12.214 [2024-05-15 02:01:36.127624] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.214 [2024-05-15 02:01:36.128029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.214 [2024-05-15 02:01:36.128192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.214 [2024-05-15 02:01:36.128229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.214 [2024-05-15 02:01:36.128249] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.214 [2024-05-15 02:01:36.128492] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.214 [2024-05-15 02:01:36.128738] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.214 [2024-05-15 02:01:36.128763] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.214 [2024-05-15 02:01:36.128779] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.214 [2024-05-15 02:01:36.132411] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:12.214 [2024-05-15 02:01:36.141655] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.214 [2024-05-15 02:01:36.142145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.214 [2024-05-15 02:01:36.142341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.214 [2024-05-15 02:01:36.142370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.214 [2024-05-15 02:01:36.142389] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.214 [2024-05-15 02:01:36.142631] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.214 [2024-05-15 02:01:36.142882] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.214 [2024-05-15 02:01:36.142907] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.214 [2024-05-15 02:01:36.142927] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.473 [2024-05-15 02:01:36.146586] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:12.473 [2024-05-15 02:01:36.155603] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.473 [2024-05-15 02:01:36.155979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.473 [2024-05-15 02:01:36.156149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.473 [2024-05-15 02:01:36.156178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.473 [2024-05-15 02:01:36.156195] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.473 [2024-05-15 02:01:36.156448] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.473 [2024-05-15 02:01:36.156696] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.473 [2024-05-15 02:01:36.156721] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.473 [2024-05-15 02:01:36.156737] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.473 [2024-05-15 02:01:36.160445] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:12.473 [2024-05-15 02:01:36.169655] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.473 [2024-05-15 02:01:36.170056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.473 [2024-05-15 02:01:36.170228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.473 [2024-05-15 02:01:36.170258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.473 [2024-05-15 02:01:36.170276] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.473 [2024-05-15 02:01:36.170519] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.473 [2024-05-15 02:01:36.170766] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.473 [2024-05-15 02:01:36.170791] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.473 [2024-05-15 02:01:36.170806] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.473 [2024-05-15 02:01:36.174441] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:12.473 [2024-05-15 02:01:36.183645] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.473 [2024-05-15 02:01:36.184112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.473 [2024-05-15 02:01:36.184252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.473 [2024-05-15 02:01:36.184281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.473 [2024-05-15 02:01:36.184298] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.473 [2024-05-15 02:01:36.184541] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.473 [2024-05-15 02:01:36.184788] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.473 [2024-05-15 02:01:36.184812] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.474 [2024-05-15 02:01:36.184829] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.474 [2024-05-15 02:01:36.188465] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:12.474 [2024-05-15 02:01:36.197671] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.474 [2024-05-15 02:01:36.198079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.474 [2024-05-15 02:01:36.198242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.474 [2024-05-15 02:01:36.198271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.474 [2024-05-15 02:01:36.198295] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.474 [2024-05-15 02:01:36.198538] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.474 [2024-05-15 02:01:36.198784] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.474 [2024-05-15 02:01:36.198809] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.474 [2024-05-15 02:01:36.198826] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.474 [2024-05-15 02:01:36.202456] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:12.474 [2024-05-15 02:01:36.211656] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.474 [2024-05-15 02:01:36.212058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.474 [2024-05-15 02:01:36.212225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.474 [2024-05-15 02:01:36.212255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.474 [2024-05-15 02:01:36.212273] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.474 [2024-05-15 02:01:36.212514] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.474 [2024-05-15 02:01:36.212761] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.474 [2024-05-15 02:01:36.212785] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.474 [2024-05-15 02:01:36.212802] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.474 [2024-05-15 02:01:36.216434] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:12.474 [2024-05-15 02:01:36.225636] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.474 [2024-05-15 02:01:36.226104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.474 [2024-05-15 02:01:36.226254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.474 [2024-05-15 02:01:36.226283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.474 [2024-05-15 02:01:36.226302] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.474 [2024-05-15 02:01:36.226544] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.474 [2024-05-15 02:01:36.226791] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.474 [2024-05-15 02:01:36.226815] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.474 [2024-05-15 02:01:36.226831] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.474 [2024-05-15 02:01:36.230465] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:12.474 [2024-05-15 02:01:36.239676] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.474 [2024-05-15 02:01:36.240050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.474 [2024-05-15 02:01:36.240181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.474 [2024-05-15 02:01:36.240209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.474 [2024-05-15 02:01:36.240238] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.474 [2024-05-15 02:01:36.240486] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.474 [2024-05-15 02:01:36.240734] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.474 [2024-05-15 02:01:36.240758] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.474 [2024-05-15 02:01:36.240774] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.474 [2024-05-15 02:01:36.244408] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:12.474 [2024-05-15 02:01:36.253611] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.474 [2024-05-15 02:01:36.254012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.474 [2024-05-15 02:01:36.254171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.474 [2024-05-15 02:01:36.254199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.474 [2024-05-15 02:01:36.254227] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.474 [2024-05-15 02:01:36.254471] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.474 [2024-05-15 02:01:36.254718] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.474 [2024-05-15 02:01:36.254743] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.474 [2024-05-15 02:01:36.254759] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.474 [2024-05-15 02:01:36.258390] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:12.474 [2024-05-15 02:01:36.267586] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.474 [2024-05-15 02:01:36.267995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.474 [2024-05-15 02:01:36.268129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.474 [2024-05-15 02:01:36.268157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.474 [2024-05-15 02:01:36.268175] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.474 [2024-05-15 02:01:36.268428] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.474 [2024-05-15 02:01:36.268675] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.474 [2024-05-15 02:01:36.268699] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.474 [2024-05-15 02:01:36.268715] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.474 [2024-05-15 02:01:36.272348] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:12.474 [2024-05-15 02:01:36.281549] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.474 [2024-05-15 02:01:36.281956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.474 [2024-05-15 02:01:36.282090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.474 [2024-05-15 02:01:36.282118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.474 [2024-05-15 02:01:36.282136] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.474 [2024-05-15 02:01:36.282388] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.474 [2024-05-15 02:01:36.282641] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.474 [2024-05-15 02:01:36.282666] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.474 [2024-05-15 02:01:36.282681] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.474 [2024-05-15 02:01:36.286316] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:12.474 [2024-05-15 02:01:36.295527] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.474 [2024-05-15 02:01:36.295921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.474 [2024-05-15 02:01:36.296087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.474 [2024-05-15 02:01:36.296115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.474 [2024-05-15 02:01:36.296133] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.474 [2024-05-15 02:01:36.296387] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.474 [2024-05-15 02:01:36.296634] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.474 [2024-05-15 02:01:36.296658] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.474 [2024-05-15 02:01:36.296674] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.474 [2024-05-15 02:01:36.300304] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:12.474 [2024-05-15 02:01:36.309509] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.474 [2024-05-15 02:01:36.309960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.474 [2024-05-15 02:01:36.310124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.474 [2024-05-15 02:01:36.310152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.474 [2024-05-15 02:01:36.310170] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.474 [2024-05-15 02:01:36.310422] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.474 [2024-05-15 02:01:36.310669] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.474 [2024-05-15 02:01:36.310693] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.474 [2024-05-15 02:01:36.310710] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.474 [2024-05-15 02:01:36.314341] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:12.474 [2024-05-15 02:01:36.323540] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.474 [2024-05-15 02:01:36.324016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.474 [2024-05-15 02:01:36.324161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.474 [2024-05-15 02:01:36.324189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.474 [2024-05-15 02:01:36.324208] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.475 [2024-05-15 02:01:36.324460] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.475 [2024-05-15 02:01:36.324708] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.475 [2024-05-15 02:01:36.324737] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.475 [2024-05-15 02:01:36.324754] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.475 [2024-05-15 02:01:36.328388] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:12.475 [2024-05-15 02:01:36.337594] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.475 [2024-05-15 02:01:36.337978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.475 [2024-05-15 02:01:36.338143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.475 [2024-05-15 02:01:36.338170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.475 [2024-05-15 02:01:36.338189] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.475 [2024-05-15 02:01:36.338441] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.475 [2024-05-15 02:01:36.338688] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.475 [2024-05-15 02:01:36.338713] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.475 [2024-05-15 02:01:36.338729] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.475 [2024-05-15 02:01:36.342363] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:12.475 [2024-05-15 02:01:36.351563] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.475 [2024-05-15 02:01:36.352029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.475 [2024-05-15 02:01:36.352163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.475 [2024-05-15 02:01:36.352191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.475 [2024-05-15 02:01:36.352225] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.475 [2024-05-15 02:01:36.352469] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.475 [2024-05-15 02:01:36.352729] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.475 [2024-05-15 02:01:36.352753] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.475 [2024-05-15 02:01:36.352769] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.475 [2024-05-15 02:01:36.356402] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:12.475 [2024-05-15 02:01:36.365605] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.475 [2024-05-15 02:01:36.366003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.475 [2024-05-15 02:01:36.366131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.475 [2024-05-15 02:01:36.366160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.475 [2024-05-15 02:01:36.366178] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.475 [2024-05-15 02:01:36.366430] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.475 [2024-05-15 02:01:36.366677] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.475 [2024-05-15 02:01:36.366701] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.475 [2024-05-15 02:01:36.366722] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.475 [2024-05-15 02:01:36.370367] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:12.475 [2024-05-15 02:01:36.379579] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.475 [2024-05-15 02:01:36.379976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.475 [2024-05-15 02:01:36.380140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.475 [2024-05-15 02:01:36.380169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.475 [2024-05-15 02:01:36.380186] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.475 [2024-05-15 02:01:36.380443] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.475 [2024-05-15 02:01:36.380691] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.475 [2024-05-15 02:01:36.380716] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.475 [2024-05-15 02:01:36.380732] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.475 [2024-05-15 02:01:36.384386] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:12.475 [2024-05-15 02:01:36.393598] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.475 [2024-05-15 02:01:36.394014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.475 [2024-05-15 02:01:36.394175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.475 [2024-05-15 02:01:36.394213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.475 [2024-05-15 02:01:36.394242] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.475 [2024-05-15 02:01:36.394496] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.475 [2024-05-15 02:01:36.394743] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.475 [2024-05-15 02:01:36.394767] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.475 [2024-05-15 02:01:36.394783] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.475 [2024-05-15 02:01:36.398420] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:12.734 [2024-05-15 02:01:36.407699] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.734 [2024-05-15 02:01:36.408100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.734 [2024-05-15 02:01:36.408248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.734 [2024-05-15 02:01:36.408278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.734 [2024-05-15 02:01:36.408296] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.734 [2024-05-15 02:01:36.408538] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.734 [2024-05-15 02:01:36.408786] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.734 [2024-05-15 02:01:36.408813] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.734 [2024-05-15 02:01:36.408830] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.734 [2024-05-15 02:01:36.412485] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:12.734 [2024-05-15 02:01:36.421699] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.734 [2024-05-15 02:01:36.422077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.734 [2024-05-15 02:01:36.422188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.734 [2024-05-15 02:01:36.422221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.734 [2024-05-15 02:01:36.422247] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.734 [2024-05-15 02:01:36.422489] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.734 [2024-05-15 02:01:36.422736] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.734 [2024-05-15 02:01:36.422760] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.734 [2024-05-15 02:01:36.422776] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.734 [2024-05-15 02:01:36.426406] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:12.734 [2024-05-15 02:01:36.435615] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.734 [2024-05-15 02:01:36.435995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.734 [2024-05-15 02:01:36.436164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.734 [2024-05-15 02:01:36.436193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.734 [2024-05-15 02:01:36.436210] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.734 [2024-05-15 02:01:36.436461] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.734 [2024-05-15 02:01:36.436708] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.734 [2024-05-15 02:01:36.436732] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.734 [2024-05-15 02:01:36.436748] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.734 [2024-05-15 02:01:36.440389] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:12.734 [2024-05-15 02:01:36.449592] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.734 [2024-05-15 02:01:36.449990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.734 [2024-05-15 02:01:36.450152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.734 [2024-05-15 02:01:36.450181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.734 [2024-05-15 02:01:36.450199] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.734 [2024-05-15 02:01:36.450450] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.734 [2024-05-15 02:01:36.450698] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.734 [2024-05-15 02:01:36.450722] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.734 [2024-05-15 02:01:36.450739] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.734 [2024-05-15 02:01:36.454371] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:12.734 [2024-05-15 02:01:36.463603] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.734 [2024-05-15 02:01:36.464003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.734 [2024-05-15 02:01:36.464142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.734 [2024-05-15 02:01:36.464171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.734 [2024-05-15 02:01:36.464189] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.734 [2024-05-15 02:01:36.464445] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.734 [2024-05-15 02:01:36.464692] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.735 [2024-05-15 02:01:36.464717] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.735 [2024-05-15 02:01:36.464733] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.735 [2024-05-15 02:01:36.468369] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:12.735 [2024-05-15 02:01:36.477602] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.735 [2024-05-15 02:01:36.478004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.735 [2024-05-15 02:01:36.478171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.735 [2024-05-15 02:01:36.478200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.735 [2024-05-15 02:01:36.478225] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.735 [2024-05-15 02:01:36.478470] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.735 [2024-05-15 02:01:36.478717] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.735 [2024-05-15 02:01:36.478741] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.735 [2024-05-15 02:01:36.478757] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.735 [2024-05-15 02:01:36.482389] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:12.735 [2024-05-15 02:01:36.491598] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.735 [2024-05-15 02:01:36.491986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.735 [2024-05-15 02:01:36.492154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.735 [2024-05-15 02:01:36.492183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.735 [2024-05-15 02:01:36.492212] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.735 [2024-05-15 02:01:36.492464] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.735 [2024-05-15 02:01:36.492712] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.735 [2024-05-15 02:01:36.492736] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.735 [2024-05-15 02:01:36.492752] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.735 [2024-05-15 02:01:36.496389] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:12.735 [2024-05-15 02:01:36.505613] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.735 [2024-05-15 02:01:36.506020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.735 [2024-05-15 02:01:36.506159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.735 [2024-05-15 02:01:36.506188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.735 [2024-05-15 02:01:36.506206] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.735 [2024-05-15 02:01:36.506458] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.735 [2024-05-15 02:01:36.506705] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.735 [2024-05-15 02:01:36.506729] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.735 [2024-05-15 02:01:36.506745] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.735 [2024-05-15 02:01:36.510378] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:12.735 [2024-05-15 02:01:36.519589] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.735 [2024-05-15 02:01:36.520046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.735 [2024-05-15 02:01:36.520189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.735 [2024-05-15 02:01:36.520226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.735 [2024-05-15 02:01:36.520245] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.735 [2024-05-15 02:01:36.520488] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.735 [2024-05-15 02:01:36.520734] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.735 [2024-05-15 02:01:36.520758] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.735 [2024-05-15 02:01:36.520774] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.735 [2024-05-15 02:01:36.524407] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:12.735 [2024-05-15 02:01:36.533610] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.735 [2024-05-15 02:01:36.534095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.735 [2024-05-15 02:01:36.534250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.735 [2024-05-15 02:01:36.534279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.735 [2024-05-15 02:01:36.534297] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.735 [2024-05-15 02:01:36.534539] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.735 [2024-05-15 02:01:36.534785] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.735 [2024-05-15 02:01:36.534809] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.735 [2024-05-15 02:01:36.534826] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.735 [2024-05-15 02:01:36.538462] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:12.735 [2024-05-15 02:01:36.547686] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.735 [2024-05-15 02:01:36.548085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.735 [2024-05-15 02:01:36.548222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.735 [2024-05-15 02:01:36.548256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.735 [2024-05-15 02:01:36.548275] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.735 [2024-05-15 02:01:36.548518] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.735 [2024-05-15 02:01:36.548765] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.735 [2024-05-15 02:01:36.548789] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.735 [2024-05-15 02:01:36.548805] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.735 [2024-05-15 02:01:36.552440] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:12.735 [2024-05-15 02:01:36.561653] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.735 [2024-05-15 02:01:36.562028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.735 [2024-05-15 02:01:36.562202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.735 [2024-05-15 02:01:36.562238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.735 [2024-05-15 02:01:36.562257] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.735 [2024-05-15 02:01:36.562499] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.735 [2024-05-15 02:01:36.562746] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.735 [2024-05-15 02:01:36.562770] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.735 [2024-05-15 02:01:36.562787] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.735 [2024-05-15 02:01:36.566428] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:12.735 [2024-05-15 02:01:36.575640] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.735 [2024-05-15 02:01:36.576036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.735 [2024-05-15 02:01:36.576165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.735 [2024-05-15 02:01:36.576195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.735 [2024-05-15 02:01:36.576213] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.735 [2024-05-15 02:01:36.576466] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.735 [2024-05-15 02:01:36.576713] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.735 [2024-05-15 02:01:36.576737] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.735 [2024-05-15 02:01:36.576754] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.735 [2024-05-15 02:01:36.580390] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:12.735 [2024-05-15 02:01:36.589630] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.735 [2024-05-15 02:01:36.590067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.735 [2024-05-15 02:01:36.590230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.735 [2024-05-15 02:01:36.590260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.735 [2024-05-15 02:01:36.590284] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.735 [2024-05-15 02:01:36.590527] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.735 [2024-05-15 02:01:36.590774] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.735 [2024-05-15 02:01:36.590799] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.735 [2024-05-15 02:01:36.590815] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.735 [2024-05-15 02:01:36.594454] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:12.735 [2024-05-15 02:01:36.603661] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.735 [2024-05-15 02:01:36.604058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.735 [2024-05-15 02:01:36.604203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.735 [2024-05-15 02:01:36.604240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.736 [2024-05-15 02:01:36.604259] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.736 [2024-05-15 02:01:36.604502] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.736 [2024-05-15 02:01:36.604748] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.736 [2024-05-15 02:01:36.604772] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.736 [2024-05-15 02:01:36.604789] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.736 [2024-05-15 02:01:36.608438] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:12.736 [2024-05-15 02:01:36.617654] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.736 [2024-05-15 02:01:36.618030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.736 [2024-05-15 02:01:36.618173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.736 [2024-05-15 02:01:36.618202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.736 [2024-05-15 02:01:36.618228] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.736 [2024-05-15 02:01:36.618472] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.736 [2024-05-15 02:01:36.618719] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.736 [2024-05-15 02:01:36.618743] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.736 [2024-05-15 02:01:36.618759] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.736 [2024-05-15 02:01:36.622407] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:12.736 [2024-05-15 02:01:36.631635] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.736 [2024-05-15 02:01:36.632032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.736 [2024-05-15 02:01:36.632170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.736 [2024-05-15 02:01:36.632199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.736 [2024-05-15 02:01:36.632224] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.736 [2024-05-15 02:01:36.632474] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.736 [2024-05-15 02:01:36.632721] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.736 [2024-05-15 02:01:36.632746] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.736 [2024-05-15 02:01:36.632762] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.736 [2024-05-15 02:01:36.636402] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:12.736 [2024-05-15 02:01:36.645626] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.736 [2024-05-15 02:01:36.646035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.736 [2024-05-15 02:01:36.646222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.736 [2024-05-15 02:01:36.646252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.736 [2024-05-15 02:01:36.646270] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.736 [2024-05-15 02:01:36.646512] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.736 [2024-05-15 02:01:36.646761] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.736 [2024-05-15 02:01:36.646786] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.736 [2024-05-15 02:01:36.646803] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.736 [2024-05-15 02:01:36.650439] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:12.736 [2024-05-15 02:01:36.659660] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.736 [2024-05-15 02:01:36.660059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.736 [2024-05-15 02:01:36.660181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.736 [2024-05-15 02:01:36.660210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.736 [2024-05-15 02:01:36.660242] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.736 [2024-05-15 02:01:36.660498] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.736 [2024-05-15 02:01:36.660754] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.736 [2024-05-15 02:01:36.660780] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.736 [2024-05-15 02:01:36.660802] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.736 [2024-05-15 02:01:36.664469] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
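Each block above is one complete reset cycle: a disconnect notice, two failed connect() attempts, a flush against the now-dead qpair, and a failed reinitialization, with the next attempt starting roughly 14 ms after the previous one (compare the bracketed timestamps). The sketch below is only an illustration of that bounded-retry shape as it appears in the log — it is not SPDK's implementation, and the 14 ms delay and attempt count are assumptions taken from the observed cadence.

    /* Illustrative reconnect loop mirroring the log's cadence. try_connect()
     * stands in for the transport connect step (the posix_sock_create /
     * nvme_tcp_qpair_connect_sock calls failing above). */
    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    static bool try_connect(void)
    {
        return false;   /* target still down, as in the log */
    }

    int main(void)
    {
        for (int attempt = 1; attempt <= 5; attempt++) {
            printf("resetting controller (attempt %d)\n", attempt);
            if (try_connect()) {
                printf("controller reinitialized\n");
                return 0;
            }
            printf("controller reinitialization failed\n");
            usleep(14000);  /* ~14 ms between attempts, matching the log */
        }
        printf("Resetting controller failed.\n");
        return 1;
    }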
00:33:12.996 [2024-05-15 02:01:36.673730] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.996 [2024-05-15 02:01:36.674138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.996 [2024-05-15 02:01:36.674275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.996 [2024-05-15 02:01:36.674304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.996 [2024-05-15 02:01:36.674321] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.996 [2024-05-15 02:01:36.674564] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.996 [2024-05-15 02:01:36.674817] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.996 [2024-05-15 02:01:36.674843] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.996 [2024-05-15 02:01:36.674859] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.996 [2024-05-15 02:01:36.678496] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:12.996 [2024-05-15 02:01:36.687708] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.996 [2024-05-15 02:01:36.688104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.996 [2024-05-15 02:01:36.688293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.996 [2024-05-15 02:01:36.688321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.996 [2024-05-15 02:01:36.688339] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.996 [2024-05-15 02:01:36.688582] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.996 [2024-05-15 02:01:36.688830] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.996 [2024-05-15 02:01:36.688855] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.996 [2024-05-15 02:01:36.688872] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.996 [2024-05-15 02:01:36.692509] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:12.996 [2024-05-15 02:01:36.701729] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.996 [2024-05-15 02:01:36.702133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.996 [2024-05-15 02:01:36.702312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.996 [2024-05-15 02:01:36.702342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.996 [2024-05-15 02:01:36.702360] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.996 [2024-05-15 02:01:36.702602] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.996 [2024-05-15 02:01:36.702851] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.996 [2024-05-15 02:01:36.702876] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.996 [2024-05-15 02:01:36.702892] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.996 [2024-05-15 02:01:36.706531] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:12.996 [2024-05-15 02:01:36.715751] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.996 [2024-05-15 02:01:36.716150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.996 [2024-05-15 02:01:36.716303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.996 [2024-05-15 02:01:36.716333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.996 [2024-05-15 02:01:36.716351] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.996 [2024-05-15 02:01:36.716594] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.996 [2024-05-15 02:01:36.716840] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.996 [2024-05-15 02:01:36.716871] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.996 [2024-05-15 02:01:36.716888] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.996 [2024-05-15 02:01:36.720532] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:12.996 [2024-05-15 02:01:36.729756] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.996 [2024-05-15 02:01:36.730165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.996 [2024-05-15 02:01:36.730320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.996 [2024-05-15 02:01:36.730348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.996 [2024-05-15 02:01:36.730366] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.996 [2024-05-15 02:01:36.730610] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.996 [2024-05-15 02:01:36.730858] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.996 [2024-05-15 02:01:36.730883] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.996 [2024-05-15 02:01:36.730900] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.996 [2024-05-15 02:01:36.734547] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:12.996 [2024-05-15 02:01:36.743768] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.996 [2024-05-15 02:01:36.744148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.996 [2024-05-15 02:01:36.744317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.996 [2024-05-15 02:01:36.744345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.996 [2024-05-15 02:01:36.744363] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.996 [2024-05-15 02:01:36.744606] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.996 [2024-05-15 02:01:36.744854] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.996 [2024-05-15 02:01:36.744879] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.996 [2024-05-15 02:01:36.744895] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.996 [2024-05-15 02:01:36.748542] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:12.996 [2024-05-15 02:01:36.757932] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.996 [2024-05-15 02:01:36.758305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.996 [2024-05-15 02:01:36.758440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.997 [2024-05-15 02:01:36.758469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.997 [2024-05-15 02:01:36.758487] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.997 [2024-05-15 02:01:36.758729] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.997 [2024-05-15 02:01:36.758977] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.997 [2024-05-15 02:01:36.759001] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.997 [2024-05-15 02:01:36.759036] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.997 [2024-05-15 02:01:36.762680] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:12.997 [2024-05-15 02:01:36.771903] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.997 [2024-05-15 02:01:36.772305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.997 [2024-05-15 02:01:36.772457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.997 [2024-05-15 02:01:36.772485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.997 [2024-05-15 02:01:36.772503] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.997 [2024-05-15 02:01:36.772745] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.997 [2024-05-15 02:01:36.772992] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.997 [2024-05-15 02:01:36.773026] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.997 [2024-05-15 02:01:36.773041] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.997 [2024-05-15 02:01:36.776684] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:12.997 [2024-05-15 02:01:36.785915] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.997 [2024-05-15 02:01:36.786299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.997 [2024-05-15 02:01:36.786440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.997 [2024-05-15 02:01:36.786469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.997 [2024-05-15 02:01:36.786497] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.997 [2024-05-15 02:01:36.786740] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.997 [2024-05-15 02:01:36.786987] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.997 [2024-05-15 02:01:36.787012] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.997 [2024-05-15 02:01:36.787028] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.997 [2024-05-15 02:01:36.790670] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:12.997 [2024-05-15 02:01:36.799932] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.997 [2024-05-15 02:01:36.800343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.997 [2024-05-15 02:01:36.800517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.997 [2024-05-15 02:01:36.800547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.997 [2024-05-15 02:01:36.800565] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.997 [2024-05-15 02:01:36.800809] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.997 [2024-05-15 02:01:36.801055] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.997 [2024-05-15 02:01:36.801080] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.997 [2024-05-15 02:01:36.801096] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.997 [2024-05-15 02:01:36.804739] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:12.997 [2024-05-15 02:01:36.813963] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.997 [2024-05-15 02:01:36.814387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.997 [2024-05-15 02:01:36.814554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.997 [2024-05-15 02:01:36.814584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.997 [2024-05-15 02:01:36.814602] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.997 [2024-05-15 02:01:36.814845] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.997 [2024-05-15 02:01:36.815093] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.997 [2024-05-15 02:01:36.815119] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.997 [2024-05-15 02:01:36.815135] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.997 [2024-05-15 02:01:36.818774] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:12.997 [2024-05-15 02:01:36.827982] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.997 [2024-05-15 02:01:36.828389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.997 [2024-05-15 02:01:36.828552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.997 [2024-05-15 02:01:36.828603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.997 [2024-05-15 02:01:36.828622] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.997 [2024-05-15 02:01:36.828864] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.997 [2024-05-15 02:01:36.829112] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.997 [2024-05-15 02:01:36.829137] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.997 [2024-05-15 02:01:36.829154] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.997 [2024-05-15 02:01:36.832796] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:12.997 [2024-05-15 02:01:36.842018] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.997 [2024-05-15 02:01:36.842403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.997 [2024-05-15 02:01:36.842567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.997 [2024-05-15 02:01:36.842597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.997 [2024-05-15 02:01:36.842615] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.997 [2024-05-15 02:01:36.842858] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.997 [2024-05-15 02:01:36.843105] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.997 [2024-05-15 02:01:36.843131] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.997 [2024-05-15 02:01:36.843148] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.997 [2024-05-15 02:01:36.846792] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:12.997 [2024-05-15 02:01:36.856005] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.997 [2024-05-15 02:01:36.856399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.997 [2024-05-15 02:01:36.856546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.997 [2024-05-15 02:01:36.856577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.997 [2024-05-15 02:01:36.856595] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.997 [2024-05-15 02:01:36.856838] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.997 [2024-05-15 02:01:36.857086] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.997 [2024-05-15 02:01:36.857111] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.997 [2024-05-15 02:01:36.857128] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.997 [2024-05-15 02:01:36.860772] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:12.997 [2024-05-15 02:01:36.869983] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.997 [2024-05-15 02:01:36.870405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.997 [2024-05-15 02:01:36.870580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.997 [2024-05-15 02:01:36.870608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.997 [2024-05-15 02:01:36.870626] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.997 [2024-05-15 02:01:36.870870] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.997 [2024-05-15 02:01:36.871117] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.997 [2024-05-15 02:01:36.871143] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.997 [2024-05-15 02:01:36.871159] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.997 [2024-05-15 02:01:36.874801] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:12.997 [2024-05-15 02:01:36.884006] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.997 [2024-05-15 02:01:36.884485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.997 [2024-05-15 02:01:36.884652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.997 [2024-05-15 02:01:36.884680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.997 [2024-05-15 02:01:36.884698] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.997 [2024-05-15 02:01:36.884940] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.997 [2024-05-15 02:01:36.885198] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.997 [2024-05-15 02:01:36.885234] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.997 [2024-05-15 02:01:36.885251] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.998 [2024-05-15 02:01:36.888881] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:12.998 [2024-05-15 02:01:36.898085] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.998 [2024-05-15 02:01:36.898549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.998 [2024-05-15 02:01:36.898778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.998 [2024-05-15 02:01:36.898806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.998 [2024-05-15 02:01:36.898824] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.998 [2024-05-15 02:01:36.899067] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.998 [2024-05-15 02:01:36.899329] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.998 [2024-05-15 02:01:36.899354] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.998 [2024-05-15 02:01:36.899370] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.998 [2024-05-15 02:01:36.903001] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:12.998 [2024-05-15 02:01:36.912003] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:12.998 [2024-05-15 02:01:36.912428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.998 [2024-05-15 02:01:36.912562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.998 [2024-05-15 02:01:36.912592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:12.998 [2024-05-15 02:01:36.912610] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:12.998 [2024-05-15 02:01:36.912853] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:12.998 [2024-05-15 02:01:36.913101] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:12.998 [2024-05-15 02:01:36.913127] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:12.998 [2024-05-15 02:01:36.913143] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:12.998 [2024-05-15 02:01:36.916784] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:13.257 [2024-05-15 02:01:36.926049] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:13.257 [2024-05-15 02:01:36.926461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.257 [2024-05-15 02:01:36.926674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.257 [2024-05-15 02:01:36.926702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:13.257 [2024-05-15 02:01:36.926721] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:13.257 [2024-05-15 02:01:36.926963] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:13.257 [2024-05-15 02:01:36.927211] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:13.257 [2024-05-15 02:01:36.927249] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:13.257 [2024-05-15 02:01:36.927266] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:13.257 [2024-05-15 02:01:36.930902] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:13.257 [2024-05-15 02:01:36.940133] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:13.257 [2024-05-15 02:01:36.940618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.257 [2024-05-15 02:01:36.940787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.257 [2024-05-15 02:01:36.940814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:13.257 [2024-05-15 02:01:36.940838] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:13.257 [2024-05-15 02:01:36.941081] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:13.257 [2024-05-15 02:01:36.941342] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:13.257 [2024-05-15 02:01:36.941367] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:13.257 [2024-05-15 02:01:36.941383] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:13.257 [2024-05-15 02:01:36.945012] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:13.257 [2024-05-15 02:01:36.954231] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:13.257 [2024-05-15 02:01:36.954614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.257 [2024-05-15 02:01:36.954752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.257 [2024-05-15 02:01:36.954781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:13.257 [2024-05-15 02:01:36.954799] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:13.257 [2024-05-15 02:01:36.955042] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:13.257 [2024-05-15 02:01:36.955304] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:13.257 [2024-05-15 02:01:36.955331] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:13.257 [2024-05-15 02:01:36.955347] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:13.257 [2024-05-15 02:01:36.958977] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:13.257 [2024-05-15 02:01:36.968190] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:13.257 [2024-05-15 02:01:36.968594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.257 [2024-05-15 02:01:36.968756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.257 [2024-05-15 02:01:36.968784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:13.257 [2024-05-15 02:01:36.968801] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:13.257 [2024-05-15 02:01:36.969043] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:13.257 [2024-05-15 02:01:36.969305] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:13.257 [2024-05-15 02:01:36.969331] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:13.257 [2024-05-15 02:01:36.969346] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:13.257 [2024-05-15 02:01:36.972974] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:13.257 [2024-05-15 02:01:36.982190] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:13.257 [2024-05-15 02:01:36.982618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.257 [2024-05-15 02:01:36.982781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.257 [2024-05-15 02:01:36.982830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:13.257 [2024-05-15 02:01:36.982849] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:13.257 [2024-05-15 02:01:36.983098] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:13.257 [2024-05-15 02:01:36.983360] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:13.257 [2024-05-15 02:01:36.983386] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:13.257 [2024-05-15 02:01:36.983402] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:13.257 [2024-05-15 02:01:36.987034] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:13.257 [2024-05-15 02:01:36.996253] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:13.257 [2024-05-15 02:01:36.996730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.257 [2024-05-15 02:01:36.996912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.257 [2024-05-15 02:01:36.996939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:13.257 [2024-05-15 02:01:36.996957] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:13.257 [2024-05-15 02:01:36.997199] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:13.257 [2024-05-15 02:01:36.997459] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:13.257 [2024-05-15 02:01:36.997485] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:13.257 [2024-05-15 02:01:36.997502] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:13.257 [2024-05-15 02:01:37.001134] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:13.257 [2024-05-15 02:01:37.010350] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:13.257 [2024-05-15 02:01:37.010784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.257 [2024-05-15 02:01:37.010920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.257 [2024-05-15 02:01:37.010948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:13.257 [2024-05-15 02:01:37.010965] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:13.257 [2024-05-15 02:01:37.011207] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:13.257 [2024-05-15 02:01:37.011467] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:13.257 [2024-05-15 02:01:37.011492] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:13.257 [2024-05-15 02:01:37.011508] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:13.257 [2024-05-15 02:01:37.015136] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:13.258 [2024-05-15 02:01:37.024354] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:13.258 [2024-05-15 02:01:37.024726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.258 [2024-05-15 02:01:37.024868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.258 [2024-05-15 02:01:37.024897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:13.258 [2024-05-15 02:01:37.024915] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:13.258 [2024-05-15 02:01:37.025157] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:13.258 [2024-05-15 02:01:37.025421] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:13.258 [2024-05-15 02:01:37.025448] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:13.258 [2024-05-15 02:01:37.025465] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:13.258 [2024-05-15 02:01:37.029094] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:13.258 [2024-05-15 02:01:37.038320] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:13.258 [2024-05-15 02:01:37.038787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.258 [2024-05-15 02:01:37.038924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.258 [2024-05-15 02:01:37.038953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:13.258 [2024-05-15 02:01:37.038971] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:13.258 [2024-05-15 02:01:37.039214] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:13.258 [2024-05-15 02:01:37.039477] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:13.258 [2024-05-15 02:01:37.039502] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:13.258 [2024-05-15 02:01:37.039517] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:13.258 [2024-05-15 02:01:37.043150] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:13.258 [2024-05-15 02:01:37.052370] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:13.258 [2024-05-15 02:01:37.052745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.258 [2024-05-15 02:01:37.052856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.258 [2024-05-15 02:01:37.052886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:13.258 [2024-05-15 02:01:37.052904] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:13.258 [2024-05-15 02:01:37.053147] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:13.258 [2024-05-15 02:01:37.053407] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:13.258 [2024-05-15 02:01:37.053433] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:13.258 [2024-05-15 02:01:37.053450] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:13.258 [2024-05-15 02:01:37.057082] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:13.258 [2024-05-15 02:01:37.066308] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:13.258 [2024-05-15 02:01:37.066706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.258 [2024-05-15 02:01:37.066846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.258 [2024-05-15 02:01:37.066874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:13.258 [2024-05-15 02:01:37.066891] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:13.258 [2024-05-15 02:01:37.067133] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:13.258 [2024-05-15 02:01:37.067394] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:13.258 [2024-05-15 02:01:37.067426] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:13.258 [2024-05-15 02:01:37.067444] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:13.258 [2024-05-15 02:01:37.071077] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:13.258 [2024-05-15 02:01:37.080293] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:13.258 [2024-05-15 02:01:37.080702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.258 [2024-05-15 02:01:37.080852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.258 [2024-05-15 02:01:37.080880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:13.258 [2024-05-15 02:01:37.080897] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:13.258 [2024-05-15 02:01:37.081140] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:13.258 [2024-05-15 02:01:37.081400] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:13.258 [2024-05-15 02:01:37.081426] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:13.258 [2024-05-15 02:01:37.081442] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:13.258 [2024-05-15 02:01:37.085072] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:13.258 [2024-05-15 02:01:37.094294] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:13.258 [2024-05-15 02:01:37.094693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.258 [2024-05-15 02:01:37.094835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.258 [2024-05-15 02:01:37.094863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:13.258 [2024-05-15 02:01:37.094881] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:13.258 [2024-05-15 02:01:37.095123] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:13.258 [2024-05-15 02:01:37.095383] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:13.258 [2024-05-15 02:01:37.095409] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:13.258 [2024-05-15 02:01:37.095425] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:13.258 [2024-05-15 02:01:37.099055] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:13.258 [2024-05-15 02:01:37.108280] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:13.258 [2024-05-15 02:01:37.108681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.258 [2024-05-15 02:01:37.108844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.258 [2024-05-15 02:01:37.108872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:13.258 [2024-05-15 02:01:37.108889] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:13.258 [2024-05-15 02:01:37.109133] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:13.258 [2024-05-15 02:01:37.109394] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:13.258 [2024-05-15 02:01:37.109420] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:13.258 [2024-05-15 02:01:37.109442] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:13.258 [2024-05-15 02:01:37.113072] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:13.258 [2024-05-15 02:01:37.122337] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:13.258 [2024-05-15 02:01:37.122749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.258 [2024-05-15 02:01:37.122914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.258 [2024-05-15 02:01:37.122942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:13.258 [2024-05-15 02:01:37.122960] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:13.258 [2024-05-15 02:01:37.123202] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:13.258 [2024-05-15 02:01:37.123463] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:13.258 [2024-05-15 02:01:37.123489] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:13.258 [2024-05-15 02:01:37.123505] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:13.258 [2024-05-15 02:01:37.127133] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:13.258 [2024-05-15 02:01:37.136352] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:13.258 [2024-05-15 02:01:37.136843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.258 [2024-05-15 02:01:37.137004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.258 [2024-05-15 02:01:37.137032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:13.258 [2024-05-15 02:01:37.137050] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:13.258 [2024-05-15 02:01:37.137306] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:13.258 [2024-05-15 02:01:37.137552] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:13.258 [2024-05-15 02:01:37.137577] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:13.258 [2024-05-15 02:01:37.137593] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:13.258 [2024-05-15 02:01:37.141234] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:13.258 [2024-05-15 02:01:37.150439] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:13.258 [2024-05-15 02:01:37.150937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.258 [2024-05-15 02:01:37.151099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.259 [2024-05-15 02:01:37.151126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:13.259 [2024-05-15 02:01:37.151144] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:13.259 [2024-05-15 02:01:37.151399] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:13.259 [2024-05-15 02:01:37.151645] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:13.259 [2024-05-15 02:01:37.151670] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:13.259 [2024-05-15 02:01:37.151686] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:13.259 [2024-05-15 02:01:37.155326] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:13.259 [2024-05-15 02:01:37.164530] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:13.259 [2024-05-15 02:01:37.165020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.259 [2024-05-15 02:01:37.165226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.259 [2024-05-15 02:01:37.165255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:13.259 [2024-05-15 02:01:37.165272] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:13.259 [2024-05-15 02:01:37.165514] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:13.259 [2024-05-15 02:01:37.165762] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:13.259 [2024-05-15 02:01:37.165787] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:13.259 [2024-05-15 02:01:37.165804] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:13.259 [2024-05-15 02:01:37.169443] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:13.259 [2024-05-15 02:01:37.178442] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:13.259 [2024-05-15 02:01:37.178840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.259 [2024-05-15 02:01:37.179060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.259 [2024-05-15 02:01:37.179112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:13.259 [2024-05-15 02:01:37.179130] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:13.259 [2024-05-15 02:01:37.179387] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:13.259 [2024-05-15 02:01:37.179636] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:13.259 [2024-05-15 02:01:37.179661] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:13.259 [2024-05-15 02:01:37.179677] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:13.259 [2024-05-15 02:01:37.183344] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:13.518 [2024-05-15 02:01:37.192420] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:13.518 [2024-05-15 02:01:37.192919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.518 [2024-05-15 02:01:37.193086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.518 [2024-05-15 02:01:37.193114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:13.518 [2024-05-15 02:01:37.193137] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:13.518 [2024-05-15 02:01:37.193395] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:13.518 [2024-05-15 02:01:37.193642] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:13.518 [2024-05-15 02:01:37.193665] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:13.518 [2024-05-15 02:01:37.193680] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:13.518 [2024-05-15 02:01:37.197312] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:13.518 [2024-05-15 02:01:37.206327] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:13.518 [2024-05-15 02:01:37.206799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.518 [2024-05-15 02:01:37.206959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.518 [2024-05-15 02:01:37.206986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:13.518 [2024-05-15 02:01:37.207004] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:13.518 [2024-05-15 02:01:37.207258] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:13.518 [2024-05-15 02:01:37.207504] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:13.518 [2024-05-15 02:01:37.207529] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:13.518 [2024-05-15 02:01:37.207545] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:13.518 [2024-05-15 02:01:37.211171] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:13.518 [2024-05-15 02:01:37.220387] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:13.518 [2024-05-15 02:01:37.220858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.518 [2024-05-15 02:01:37.221021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.518 [2024-05-15 02:01:37.221049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:13.518 [2024-05-15 02:01:37.221067] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:13.518 [2024-05-15 02:01:37.221320] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:13.518 [2024-05-15 02:01:37.221569] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:13.518 [2024-05-15 02:01:37.221594] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:13.518 [2024-05-15 02:01:37.221610] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:13.518 [2024-05-15 02:01:37.225248] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:13.518 [2024-05-15 02:01:37.234476] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:13.518 [2024-05-15 02:01:37.234875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.518 [2024-05-15 02:01:37.235037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.518 [2024-05-15 02:01:37.235065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:13.518 [2024-05-15 02:01:37.235082] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:13.518 [2024-05-15 02:01:37.235335] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:13.518 [2024-05-15 02:01:37.235581] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:13.518 [2024-05-15 02:01:37.235606] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:13.518 [2024-05-15 02:01:37.235622] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:13.518 [2024-05-15 02:01:37.239260] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:13.518 [2024-05-15 02:01:37.248471] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:13.518 [2024-05-15 02:01:37.248948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.518 [2024-05-15 02:01:37.249114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.518 [2024-05-15 02:01:37.249142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:13.518 [2024-05-15 02:01:37.249159] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:13.518 [2024-05-15 02:01:37.249419] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:13.518 [2024-05-15 02:01:37.249667] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:13.518 [2024-05-15 02:01:37.249693] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:13.518 [2024-05-15 02:01:37.249709] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:13.518 [2024-05-15 02:01:37.253346] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:13.518 [2024-05-15 02:01:37.262562] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:13.518 [2024-05-15 02:01:37.263006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.518 [2024-05-15 02:01:37.263158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.518 [2024-05-15 02:01:37.263186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:13.518 [2024-05-15 02:01:37.263204] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:13.518 [2024-05-15 02:01:37.263455] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:13.518 [2024-05-15 02:01:37.263701] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:13.518 [2024-05-15 02:01:37.263727] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:13.518 [2024-05-15 02:01:37.263743] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:13.518 [2024-05-15 02:01:37.267376] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:13.518 [2024-05-15 02:01:37.276577] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:13.518 [2024-05-15 02:01:37.277081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.518 [2024-05-15 02:01:37.277251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.518 [2024-05-15 02:01:37.277280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:13.518 [2024-05-15 02:01:37.277298] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:13.518 [2024-05-15 02:01:37.277540] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:13.518 [2024-05-15 02:01:37.277785] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:13.518 [2024-05-15 02:01:37.277811] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:13.518 [2024-05-15 02:01:37.277827] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:13.518 [2024-05-15 02:01:37.281465] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:13.518 [2024-05-15 02:01:37.290682] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:13.518 [2024-05-15 02:01:37.291091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.518 [2024-05-15 02:01:37.291261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.518 [2024-05-15 02:01:37.291295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:13.518 [2024-05-15 02:01:37.291314] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:13.518 [2024-05-15 02:01:37.291556] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:13.518 [2024-05-15 02:01:37.291804] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:13.518 [2024-05-15 02:01:37.291830] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:13.519 [2024-05-15 02:01:37.291846] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:13.519 [2024-05-15 02:01:37.295484] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:13.519 [2024-05-15 02:01:37.304700] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:13.519 [2024-05-15 02:01:37.305105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.519 [2024-05-15 02:01:37.305257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.519 [2024-05-15 02:01:37.305286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:13.519 [2024-05-15 02:01:37.305304] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:13.519 [2024-05-15 02:01:37.305547] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:13.519 [2024-05-15 02:01:37.305793] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:13.519 [2024-05-15 02:01:37.305818] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:13.519 [2024-05-15 02:01:37.305835] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:13.519 [2024-05-15 02:01:37.309469] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:13.519 [2024-05-15 02:01:37.318677] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:13.519 [2024-05-15 02:01:37.319146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.519 [2024-05-15 02:01:37.319305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.519 [2024-05-15 02:01:37.319334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:13.519 [2024-05-15 02:01:37.319352] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:13.519 [2024-05-15 02:01:37.319594] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:13.519 [2024-05-15 02:01:37.319840] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:13.519 [2024-05-15 02:01:37.319865] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:13.519 [2024-05-15 02:01:37.319882] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:13.519 [2024-05-15 02:01:37.323519] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:13.519 [2024-05-15 02:01:37.332729] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:13.519 [2024-05-15 02:01:37.333212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.519 [2024-05-15 02:01:37.333360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.519 [2024-05-15 02:01:37.333388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:13.519 [2024-05-15 02:01:37.333412] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:13.519 [2024-05-15 02:01:37.333654] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:13.519 [2024-05-15 02:01:37.333902] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:13.519 [2024-05-15 02:01:37.333927] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:13.519 [2024-05-15 02:01:37.333944] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:13.519 [2024-05-15 02:01:37.337582] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:13.519 [2024-05-15 02:01:37.346796] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:13.519 [2024-05-15 02:01:37.347252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.519 [2024-05-15 02:01:37.347418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.519 [2024-05-15 02:01:37.347446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:13.519 [2024-05-15 02:01:37.347463] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:13.519 [2024-05-15 02:01:37.347705] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:13.519 [2024-05-15 02:01:37.347953] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:13.519 [2024-05-15 02:01:37.347979] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:13.519 [2024-05-15 02:01:37.347995] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:13.519 [2024-05-15 02:01:37.351635] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:13.519 [2024-05-15 02:01:37.360845] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:13.519 [2024-05-15 02:01:37.361256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.519 [2024-05-15 02:01:37.361494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.519 [2024-05-15 02:01:37.361523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:13.519 [2024-05-15 02:01:37.361541] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:13.519 [2024-05-15 02:01:37.361785] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:13.519 [2024-05-15 02:01:37.362032] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:13.519 [2024-05-15 02:01:37.362057] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:13.519 [2024-05-15 02:01:37.362074] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:13.519 [2024-05-15 02:01:37.365717] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:13.519 [2024-05-15 02:01:37.374927] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:13.519 [2024-05-15 02:01:37.375335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.519 [2024-05-15 02:01:37.375454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.519 [2024-05-15 02:01:37.375482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:13.519 [2024-05-15 02:01:37.375500] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:13.519 [2024-05-15 02:01:37.375747] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:13.519 [2024-05-15 02:01:37.375993] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:13.519 [2024-05-15 02:01:37.376019] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:13.519 [2024-05-15 02:01:37.376035] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:13.519 [2024-05-15 02:01:37.379677] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:13.519 [2024-05-15 02:01:37.388882] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:13.519 [2024-05-15 02:01:37.389282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.519 [2024-05-15 02:01:37.389421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.519 [2024-05-15 02:01:37.389449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:13.519 [2024-05-15 02:01:37.389467] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:13.519 [2024-05-15 02:01:37.389710] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:13.519 [2024-05-15 02:01:37.389957] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:13.519 [2024-05-15 02:01:37.389983] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:13.519 [2024-05-15 02:01:37.390000] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:13.519 [2024-05-15 02:01:37.393639] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:13.519 [2024-05-15 02:01:37.402841] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:13.519 [2024-05-15 02:01:37.403240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.519 [2024-05-15 02:01:37.403415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.519 [2024-05-15 02:01:37.403442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:13.519 [2024-05-15 02:01:37.403460] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:13.519 [2024-05-15 02:01:37.403702] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:13.519 [2024-05-15 02:01:37.403949] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:13.519 [2024-05-15 02:01:37.403975] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:13.519 [2024-05-15 02:01:37.403991] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:13.519 [2024-05-15 02:01:37.407629] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:13.519 [2024-05-15 02:01:37.416836] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:13.519 [2024-05-15 02:01:37.417248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.519 [2024-05-15 02:01:37.417455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.519 [2024-05-15 02:01:37.417516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:13.519 [2024-05-15 02:01:37.417534] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:13.519 [2024-05-15 02:01:37.417776] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:13.520 [2024-05-15 02:01:37.418032] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:13.520 [2024-05-15 02:01:37.418057] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:13.520 [2024-05-15 02:01:37.418074] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:13.520 [2024-05-15 02:01:37.421717] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:13.520 [2024-05-15 02:01:37.430928] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:13.520 [2024-05-15 02:01:37.431347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.520 [2024-05-15 02:01:37.431541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.520 [2024-05-15 02:01:37.431603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:13.520 [2024-05-15 02:01:37.431621] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:13.520 [2024-05-15 02:01:37.431864] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:13.520 [2024-05-15 02:01:37.432112] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:13.520 [2024-05-15 02:01:37.432138] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:13.520 [2024-05-15 02:01:37.432155] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:13.520 [2024-05-15 02:01:37.435796] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:13.520 [2024-05-15 02:01:37.445017] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:13.520 [2024-05-15 02:01:37.445434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.520 [2024-05-15 02:01:37.445572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.520 [2024-05-15 02:01:37.445599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:13.520 [2024-05-15 02:01:37.445616] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:13.520 [2024-05-15 02:01:37.445858] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:13.520 [2024-05-15 02:01:37.446106] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:13.520 [2024-05-15 02:01:37.446131] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:13.520 [2024-05-15 02:01:37.446148] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:13.779 [2024-05-15 02:01:37.449841] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:13.779 [2024-05-15 02:01:37.459083] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:13.779 [2024-05-15 02:01:37.459471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.779 [2024-05-15 02:01:37.459638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.779 [2024-05-15 02:01:37.459666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:13.779 [2024-05-15 02:01:37.459683] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:13.779 [2024-05-15 02:01:37.459926] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:13.779 [2024-05-15 02:01:37.460174] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:13.779 [2024-05-15 02:01:37.460205] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:13.779 [2024-05-15 02:01:37.460237] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:13.779 [2024-05-15 02:01:37.463872] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:13.779 [2024-05-15 02:01:37.473098] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:13.779 [2024-05-15 02:01:37.473491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.779 [2024-05-15 02:01:37.473666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.780 [2024-05-15 02:01:37.473695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:13.780 [2024-05-15 02:01:37.473714] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:13.780 [2024-05-15 02:01:37.473956] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:13.780 [2024-05-15 02:01:37.474203] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:13.780 [2024-05-15 02:01:37.474238] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:13.780 [2024-05-15 02:01:37.474256] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:13.780 [2024-05-15 02:01:37.477923] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:13.780 [2024-05-15 02:01:37.487147] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:13.780 [2024-05-15 02:01:37.487634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.780 [2024-05-15 02:01:37.487800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.780 [2024-05-15 02:01:37.487830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:13.780 [2024-05-15 02:01:37.487848] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:13.780 [2024-05-15 02:01:37.488103] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:13.780 [2024-05-15 02:01:37.488367] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:13.780 [2024-05-15 02:01:37.488393] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:13.780 [2024-05-15 02:01:37.488409] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:13.780 [2024-05-15 02:01:37.492052] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:13.780 [2024-05-15 02:01:37.501081] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:13.780 [2024-05-15 02:01:37.501488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.780 [2024-05-15 02:01:37.501595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.780 [2024-05-15 02:01:37.501624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:13.780 [2024-05-15 02:01:37.501641] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:13.780 [2024-05-15 02:01:37.501884] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:13.780 [2024-05-15 02:01:37.502132] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:13.780 [2024-05-15 02:01:37.502156] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:13.780 [2024-05-15 02:01:37.502177] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:13.780 [2024-05-15 02:01:37.505815] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:13.780 [2024-05-15 02:01:37.515034] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:13.780 [2024-05-15 02:01:37.515463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.780 [2024-05-15 02:01:37.515710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.780 [2024-05-15 02:01:37.515770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:13.780 [2024-05-15 02:01:37.515788] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:13.780 [2024-05-15 02:01:37.516031] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:13.780 [2024-05-15 02:01:37.516292] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:13.780 [2024-05-15 02:01:37.516317] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:13.780 [2024-05-15 02:01:37.516333] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:13.780 [2024-05-15 02:01:37.519971] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:13.780 [2024-05-15 02:01:37.528984] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:13.780 [2024-05-15 02:01:37.529391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.780 [2024-05-15 02:01:37.529554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.780 [2024-05-15 02:01:37.529582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:13.780 [2024-05-15 02:01:37.529599] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:13.780 [2024-05-15 02:01:37.529842] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:13.780 [2024-05-15 02:01:37.530090] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:13.780 [2024-05-15 02:01:37.530114] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:13.780 [2024-05-15 02:01:37.530130] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:13.780 [2024-05-15 02:01:37.533792] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:13.780 [2024-05-15 02:01:37.543011] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:13.780 [2024-05-15 02:01:37.543432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.780 [2024-05-15 02:01:37.543573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.780 [2024-05-15 02:01:37.543603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:13.780 [2024-05-15 02:01:37.543620] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:13.780 [2024-05-15 02:01:37.543863] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:13.780 [2024-05-15 02:01:37.544109] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:13.780 [2024-05-15 02:01:37.544133] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:13.780 [2024-05-15 02:01:37.544149] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:13.780 [2024-05-15 02:01:37.547795] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:13.780 [2024-05-15 02:01:37.557019] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:13.780 [2024-05-15 02:01:37.557442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.780 [2024-05-15 02:01:37.557638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.780 [2024-05-15 02:01:37.557699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:13.780 [2024-05-15 02:01:37.557717] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:13.780 [2024-05-15 02:01:37.557959] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:13.780 [2024-05-15 02:01:37.558206] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:13.780 [2024-05-15 02:01:37.558240] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:13.780 [2024-05-15 02:01:37.558264] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:13.780 [2024-05-15 02:01:37.561898] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:13.780 [2024-05-15 02:01:37.571125] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:13.780 [2024-05-15 02:01:37.571509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.780 [2024-05-15 02:01:37.571675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.780 [2024-05-15 02:01:37.571704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:13.780 [2024-05-15 02:01:37.571722] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:13.780 [2024-05-15 02:01:37.571965] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:13.780 [2024-05-15 02:01:37.572211] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:13.781 [2024-05-15 02:01:37.572247] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:13.781 [2024-05-15 02:01:37.572264] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:13.781 [2024-05-15 02:01:37.575888] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:13.781 [2024-05-15 02:01:37.585096] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:13.781 [2024-05-15 02:01:37.585479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.781 [2024-05-15 02:01:37.585642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.781 [2024-05-15 02:01:37.585671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:13.781 [2024-05-15 02:01:37.585689] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:13.781 [2024-05-15 02:01:37.585931] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:13.781 [2024-05-15 02:01:37.586177] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:13.781 [2024-05-15 02:01:37.586202] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:13.781 [2024-05-15 02:01:37.586230] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:13.781 [2024-05-15 02:01:37.589866] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:13.781 [2024-05-15 02:01:37.599100] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:13.781 [2024-05-15 02:01:37.599513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.781 [2024-05-15 02:01:37.599723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.781 [2024-05-15 02:01:37.599785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:13.781 [2024-05-15 02:01:37.599803] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:13.781 [2024-05-15 02:01:37.600046] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:13.781 [2024-05-15 02:01:37.600308] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:13.781 [2024-05-15 02:01:37.600335] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:13.781 [2024-05-15 02:01:37.600352] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:13.781 [2024-05-15 02:01:37.603982] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:13.781 [2024-05-15 02:01:37.613225] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:13.781 [2024-05-15 02:01:37.613722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.781 [2024-05-15 02:01:37.613883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.781 [2024-05-15 02:01:37.613911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:13.781 [2024-05-15 02:01:37.613929] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:13.781 [2024-05-15 02:01:37.614171] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:13.781 [2024-05-15 02:01:37.614427] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:13.781 [2024-05-15 02:01:37.614453] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:13.781 [2024-05-15 02:01:37.614472] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:13.781 [2024-05-15 02:01:37.618110] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:13.781 [2024-05-15 02:01:37.627343] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:13.781 [2024-05-15 02:01:37.627801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.781 [2024-05-15 02:01:37.627982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.781 [2024-05-15 02:01:37.628010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:13.781 [2024-05-15 02:01:37.628028] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:13.781 [2024-05-15 02:01:37.628284] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:13.781 [2024-05-15 02:01:37.628531] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:13.781 [2024-05-15 02:01:37.628562] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:13.781 [2024-05-15 02:01:37.628579] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:13.781 [2024-05-15 02:01:37.632210] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:13.781 [2024-05-15 02:01:37.641449] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:13.781 [2024-05-15 02:01:37.641914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.781 [2024-05-15 02:01:37.642057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.781 [2024-05-15 02:01:37.642087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:13.781 [2024-05-15 02:01:37.642106] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:13.781 [2024-05-15 02:01:37.642361] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:13.781 [2024-05-15 02:01:37.642613] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:13.781 [2024-05-15 02:01:37.642638] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:13.781 [2024-05-15 02:01:37.642654] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:13.781 [2024-05-15 02:01:37.646292] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:13.781 [2024-05-15 02:01:37.655507] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:13.781 [2024-05-15 02:01:37.655907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.781 [2024-05-15 02:01:37.656059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.781 [2024-05-15 02:01:37.656087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:13.781 [2024-05-15 02:01:37.656105] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:13.781 [2024-05-15 02:01:37.656359] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:13.781 [2024-05-15 02:01:37.656607] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:13.781 [2024-05-15 02:01:37.656633] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:13.781 [2024-05-15 02:01:37.656649] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:13.781 [2024-05-15 02:01:37.660284] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:13.781 [2024-05-15 02:01:37.669499] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:13.781 [2024-05-15 02:01:37.669901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.781 [2024-05-15 02:01:37.670028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.781 [2024-05-15 02:01:37.670055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:13.781 [2024-05-15 02:01:37.670073] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:13.781 [2024-05-15 02:01:37.670327] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:13.781 [2024-05-15 02:01:37.670573] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:13.781 [2024-05-15 02:01:37.670598] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:13.781 [2024-05-15 02:01:37.670614] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:13.781 [2024-05-15 02:01:37.674253] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
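errno = 111 is ECONNREFUSED: nothing is accepting TCP connections on 10.0.0.2:4420 at this point, so every reset attempt tears the qpair down again and spdk_nvme_ctrlr_reconnect_poll_async fails. A quick way to confirm that from wherever the initiator runs, assuming nc(1) is installed (a hypothetical spot check, not part of bdevperf.sh):

  # exits non-zero while the target port refuses connections (errno 111)
  nc -z -w 1 10.0.0.2 4420 && echo "4420 listening" || echo "4420 refused"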
00:33:13.781 [2024-05-15 02:01:37.683469] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:13.781 [2024-05-15 02:01:37.683856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.781 [2024-05-15 02:01:37.683997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.781 [2024-05-15 02:01:37.684027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:13.781 [2024-05-15 02:01:37.684051] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:13.781 [2024-05-15 02:01:37.684312] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:13.781 [2024-05-15 02:01:37.684564] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:13.781 [2024-05-15 02:01:37.684589] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:13.782 [2024-05-15 02:01:37.684606] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:13.782 [2024-05-15 02:01:37.688245] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:13.782 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 17310 Killed "${NVMF_APP[@]}" "$@"
00:33:13.782 02:01:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:33:13.782 [2024-05-15 02:01:37.697466] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:13.782 02:01:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:33:13.782 02:01:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:33:13.782 02:01:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@721 -- # xtrace_disable
00:33:13.782 [2024-05-15 02:01:37.697850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.782 02:01:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:13.782 [2024-05-15 02:01:37.698013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-05-15 02:01:37.698042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
[2024-05-15 02:01:37.698061] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
[2024-05-15 02:01:37.698315] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
[2024-05-15 02:01:37.698563] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
[2024-05-15 02:01:37.698588] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
[2024-05-15 02:01:37.698605] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:13.782 02:01:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=18268
02:01:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
02:01:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 18268
[2024-05-15 02:01:37.702243] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
02:01:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@828 -- # '[' -z 18268 ']'
02:01:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock
02:01:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local max_retries=100
02:01:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
02:01:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@837 -- # xtrace_disable
02:01:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:14.042 [2024-05-15 02:01:37.711534] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:14.042 [2024-05-15 02:01:37.711912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.042 [2024-05-15 02:01:37.712045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:14.042 [2024-05-15 02:01:37.712080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420
00:33:14.042 [2024-05-15 02:01:37.712100] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set
00:33:14.042 [2024-05-15 02:01:37.712353] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor
00:33:14.042 [2024-05-15 02:01:37.712601] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:14.042 [2024-05-15 02:01:37.712625] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:14.042 [2024-05-15 02:01:37.712642] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:14.042 [2024-05-15 02:01:37.716300] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
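The "Killed" line is the cause of the refusals: bash reports at bdevperf.sh line 35 that the previous target (pid 17310, "${NVMF_APP[@]}") was killed, and tgt_init immediately brings up a replacement. Condensed from the xtrace above, using the suite's own nvmf/common.sh helpers (behavior summarized from the trace, not re-checked against this exact revision):

  nvmfappstart -m 0xE       # ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xE
  waitforlisten "$nvmfpid"  # poll /var/tmp/spdk.sock for the new pid, max_retries=100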
00:33:14.042 [... two identical reconnect cycles (02:01:37.725553-02:01:37.744330) fail with connect() errno = 111 against 10.0.0.2:4420 ...]
00:33:14.043 [2024-05-15 02:01:37.749362] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization...
00:33:14.043 [2024-05-15 02:01:37.749443] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:14.043 [... two identical reconnect cycles (02:01:37.753138-02:01:37.771038) fail with connect() errno = 111 ...]
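The EAL -c 0xE argument mirrors the -m 0xE mask handed to nvmfappstart: bit i set means core i is used, so the target gets cores 1-3. A one-liner to decode any such mask, assuming python3 is available on the build node:

  python3 -c 'm = 0xE; print([c for c in range(64) if m >> c & 1])'  # -> [1, 2, 3]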
00:33:14.043 [... two identical reconnect cycles (02:01:37.780638-02:01:37.798392) fail with connect() errno = 111 ...]
00:33:14.043 EAL: No free 2048 kB hugepages reported on node 1
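The hugepage line is only a notice here, but it is the first thing to check when EAL initialization fails outright, since SPDK carves its memory out of hugepages. Standard sysfs paths show the per-node situation:

  # nr/free/surplus 2 MB hugepages per NUMA node
  grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/*_hugepages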
00:33:14.043 [... two identical reconnect cycles (02:01:37.807610-02:01:37.825539) fail with connect() errno = 111 ...]
00:33:14.043 [2024-05-15 02:01:37.837988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:33:14.043 [... two reconnect cycles (02:01:37.834811-02:01:37.853060) fail with connect() errno = 111 ...]
00:33:14.044 [... four identical reconnect cycles (02:01:37.862002-02:01:37.907259) fail with connect() errno = 111 ...]
00:33:14.044 [... two reconnect cycles (02:01:37.916356-02:01:37.934359) fail with connect() errno = 111 ...]
00:33:14.044 [2024-05-15 02:01:37.931552] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:14.044 [2024-05-15 02:01:37.931590] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:14.044 [2024-05-15 02:01:37.931605] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:14.044 [2024-05-15 02:01:37.931617] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:14.044 [2024-05-15 02:01:37.931628] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:14.044 [2024-05-15 02:01:37.931692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:33:14.045 [2024-05-15 02:01:37.931752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:33:14.045 [2024-05-15 02:01:37.931756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:33:14.045 [... two reconnect cycles (02:01:37.943540-02:01:37.961882) fail with connect() errno = 111 ...]
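app_setup_trace spells out how to inspect the 0xFFFF tracepoint mask enabled via -e, and the three reactor lines confirm the 0xE core mask took effect (cores 1, 2, 3). Following the notice verbatim (spdk_trace is built under build/bin in this workspace; the shm file name comes from the notice itself):

  # snapshot events from the running target, app instance id 0
  spdk_trace -s nvmf -i 0
  # or keep the raw buffer for offline decoding
  cp /dev/shm/nvmf_trace.0 /tmp/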
00:33:14.045 [... four identical reconnect cycles (02:01:37.971156-02:01:38.016445) fail with connect() errno = 111 ...]
00:33:14.304 [... reconnect cycle (02:01:38.025755-02:01:38.030066) fails with connect() errno = 111 ...]
00:33:14.304 02:01:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@857 -- # (( i == 0 ))
02:01:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@861 -- # return 0
02:01:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
02:01:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@727 -- # xtrace_disable
02:01:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:14.304 [... reconnect cycle (02:01:38.039334-02:01:38.043704) fails with connect() errno = 111 ...]
00:33:14.305 [2024-05-15 02:01:38.053004] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:14.305 [2024-05-15 02:01:38.053366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.305 [2024-05-15 02:01:38.053484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.305 [2024-05-15 02:01:38.053512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:14.305 [2024-05-15 02:01:38.053529] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:14.305 [2024-05-15 02:01:38.053777] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:14.305 [2024-05-15 02:01:38.053987] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:14.305 [2024-05-15 02:01:38.054008] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:14.305 [2024-05-15 02:01:38.054021] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:14.305 02:01:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:14.305 02:01:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:14.305 02:01:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:14.305 02:01:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:14.305 [2024-05-15 02:01:38.057396] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:14.305 [2024-05-15 02:01:38.062305] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:14.305 [2024-05-15 02:01:38.066560] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:14.305 [2024-05-15 02:01:38.066870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.305 [2024-05-15 02:01:38.067019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.305 [2024-05-15 02:01:38.067045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:14.305 [2024-05-15 02:01:38.067062] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:14.305 [2024-05-15 02:01:38.067304] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:14.305 [2024-05-15 02:01:38.067554] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:14.305 [2024-05-15 02:01:38.067576] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:14.305 [2024-05-15 02:01:38.067591] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:33:14.305 02:01:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:14.305 02:01:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:14.305 02:01:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:14.305 02:01:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:14.305 [2024-05-15 02:01:38.070930] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:14.305 [2024-05-15 02:01:38.080088] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:14.305 [2024-05-15 02:01:38.080443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.305 [2024-05-15 02:01:38.080542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.305 [2024-05-15 02:01:38.080567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:14.305 [2024-05-15 02:01:38.080583] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:14.305 [2024-05-15 02:01:38.080801] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:14.305 [2024-05-15 02:01:38.081026] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:14.305 [2024-05-15 02:01:38.081047] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:14.305 [2024-05-15 02:01:38.081061] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:14.305 [2024-05-15 02:01:38.084378] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:14.305 [2024-05-15 02:01:38.093610] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:14.305 [2024-05-15 02:01:38.094227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.305 [2024-05-15 02:01:38.094402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.305 [2024-05-15 02:01:38.094429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:14.305 [2024-05-15 02:01:38.094451] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:14.305 [2024-05-15 02:01:38.094708] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:14.305 [2024-05-15 02:01:38.094923] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:14.305 [2024-05-15 02:01:38.094945] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:14.305 [2024-05-15 02:01:38.094963] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:14.305 Malloc0 00:33:14.305 [2024-05-15 02:01:38.098233] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:14.305 02:01:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:14.305 02:01:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:14.305 02:01:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:14.305 02:01:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:14.305 02:01:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:14.305 02:01:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:14.305 02:01:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:14.305 02:01:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:14.305 [2024-05-15 02:01:38.107331] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:14.305 [2024-05-15 02:01:38.107745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.305 [2024-05-15 02:01:38.107888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.305 [2024-05-15 02:01:38.107914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6b0d30 with addr=10.0.0.2, port=4420 00:33:14.305 [2024-05-15 02:01:38.107931] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0d30 is same with the state(5) to be set 00:33:14.305 [2024-05-15 02:01:38.108162] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b0d30 (9): Bad file descriptor 00:33:14.305 [2024-05-15 02:01:38.108409] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:14.305 [2024-05-15 02:01:38.108432] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:14.305 [2024-05-15 02:01:38.108447] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:14.305 [2024-05-15 02:01:38.111747] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
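Interleaved with the retry noise, the test is assembling the target over RPC: a TCP transport, a 64 MiB malloc bdev, subsystem cnode1 with that bdev as a namespace, and (immediately below) the 10.0.0.2:4420 listener. The same sequence, sketched as direct rpc.py calls on the assumption that rpc_cmd in this trace is the usual thin wrapper around scripts/rpc.py:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192    # TCP transport, 8 KiB in-capsule data
  $rpc bdev_malloc_create 64 512 -b Malloc0       # 64 MiB bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420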
00:33:14.305 02:01:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:14.305 02:01:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:14.305 02:01:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:14.305 02:01:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:14.305 [2024-05-15 02:01:38.117596] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:33:14.305 [2024-05-15 02:01:38.117862] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:14.305 [2024-05-15 02:01:38.120915] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:14.305 02:01:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:14.305 02:01:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 17593 00:33:14.305 [2024-05-15 02:01:38.196871] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:33:24.266 00:33:24.266 Latency(us) 00:33:24.266 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:24.266 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:24.266 Verification LBA range: start 0x0 length 0x4000 00:33:24.266 Nvme1n1 : 15.01 6637.03 25.93 8412.91 0.00 8479.36 585.58 18932.62 00:33:24.266 =================================================================================================================== 00:33:24.266 Total : 6637.03 25.93 8412.91 0.00 8479.36 585.58 18932.62 00:33:24.266 02:01:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:33:24.266 02:01:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:24.266 02:01:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:24.266 02:01:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:24.266 02:01:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:24.266 02:01:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:33:24.266 02:01:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:33:24.266 02:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:24.266 02:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:33:24.266 02:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:24.266 02:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:33:24.266 02:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:24.266 02:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:24.266 rmmod nvme_tcp 00:33:24.266 rmmod nvme_fabrics 00:33:24.266 rmmod nvme_keyring 00:33:24.266 02:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:24.266 02:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:33:24.266 02:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:33:24.266 02:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 18268 ']' 00:33:24.266 02:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 18268 00:33:24.266 
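The bdevperf summary above is easy to sanity-check: the job issues 4096-byte I/Os, so 6637.03 IOPS should correspond to the reported 25.93 MiB/s (the large Fail/s figure presumably reflects I/O aborted by the deliberate controller resets this test drives, not data corruption):

  # 4 KiB per I/O: IOPS * 4096 bytes / 1048576 bytes-per-MiB
  awk 'BEGIN { printf "%.2f MiB/s\n", 6637.03 * 4096 / 1048576 }'
  # -> 25.93 MiB/s, matching the summary row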
02:01:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@947 -- # '[' -z 18268 ']' 00:33:24.266 02:01:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # kill -0 18268 00:33:24.266 02:01:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # uname 00:33:24.266 02:01:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:33:24.266 02:01:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 18268 00:33:24.266 02:01:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:33:24.266 02:01:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:33:24.266 02:01:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@965 -- # echo 'killing process with pid 18268' 00:33:24.266 killing process with pid 18268 00:33:24.266 02:01:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # kill 18268 00:33:24.266 [2024-05-15 02:01:47.391597] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:33:24.266 02:01:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@971 -- # wait 18268 00:33:24.266 02:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:24.266 02:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:24.266 02:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:24.266 02:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:24.266 02:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:24.266 02:01:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:24.266 02:01:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:24.266 02:01:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:26.165 02:01:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:26.165 00:33:26.165 real 0m22.808s 00:33:26.165 user 0m59.517s 00:33:26.165 sys 0m4.715s 00:33:26.165 02:01:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:33:26.165 02:01:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:26.165 ************************************ 00:33:26.165 END TEST nvmf_bdevperf 00:33:26.165 ************************************ 00:33:26.165 02:01:49 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:26.165 02:01:49 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:33:26.165 02:01:49 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:33:26.165 02:01:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:26.165 ************************************ 00:33:26.165 START TEST nvmf_target_disconnect 00:33:26.165 ************************************ 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:26.165 * Looking for test storage... 
00:33:26.165 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:33:26.165 02:01:49 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:33:28.695 Found 0000:09:00.0 (0x8086 - 0x159b) 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:33:28.695 Found 0000:09:00.1 (0x8086 - 0x159b) 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:28.695 02:01:52 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:33:28.695 Found net devices under 0000:09:00.0: cvl_0_0 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:33:28.695 Found net devices under 0000:09:00.1: cvl_0_1 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:28.695 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:28.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:28.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:33:28.696 00:33:28.696 --- 10.0.0.2 ping statistics --- 00:33:28.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:28.696 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:28.696 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:28.696 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:33:28.696 00:33:28.696 --- 10.0.0.1 ping statistics --- 00:33:28.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:28.696 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1104 -- # xtrace_disable 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:28.696 ************************************ 00:33:28.696 START TEST nvmf_target_disconnect_tc1 00:33:28.696 ************************************ 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # nvmf_target_disconnect_tc1 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@649 -- # local es=0 00:33:28.696 
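Before tc1 gets going, note the namespace plumbing that nvmf_tcp_init just performed: one port of the detected E810 pair (cvl_0_0) is moved into a fresh netns as the target side at 10.0.0.2, its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and both directions are verified with ping. Condensed from the trace above (the two ports are presumably cabled back-to-back on this rig):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator stays in root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target: 0.212 ms
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator: 0.073 ms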
02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:28.696 EAL: No free 2048 kB hugepages reported on node 1 00:33:28.696 [2024-05-15 02:01:52.405484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.696 [2024-05-15 02:01:52.405651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.696 [2024-05-15 02:01:52.405682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x52c520 with addr=10.0.0.2, port=4420 00:33:28.696 [2024-05-15 02:01:52.405724] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:28.696 [2024-05-15 02:01:52.405746] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:28.696 [2024-05-15 02:01:52.405763] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:33:28.696 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:33:28.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:33:28.696 Initializing NVMe Controllers 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # es=1 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@676 
-- # (( !es == 0 )) 00:33:28.696 00:33:28.696 real 0m0.101s 00:33:28.696 user 0m0.039s 00:33:28.696 sys 0m0.061s 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:33:28.696 ************************************ 00:33:28.696 END TEST nvmf_target_disconnect_tc1 00:33:28.696 ************************************ 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1104 -- # xtrace_disable 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:28.696 ************************************ 00:33:28.696 START TEST nvmf_target_disconnect_tc2 00:33:28.696 ************************************ 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # nvmf_target_disconnect_tc2 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@721 -- # xtrace_disable 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=21776 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 21776 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@828 -- # '[' -z 21776 ']' 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local max_retries=100 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:28.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
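The tc2 target is started with -m 0xF0, pinning it to CPU cores 4-7, which is exactly why the reactor lines just below report cores 4, 5, 6 and 7. Decoding such a core mask is a one-liner:

  # 0xF0 = 0b11110000: bits 4-7 set, so reactors run on cores 4-7 only
  mask=$((0xF0))
  for core in {0..7}; do
      (( mask & (1 << core) )) && echo "reactor on core $core"
  done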
00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # xtrace_disable 00:33:28.696 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:28.696 [2024-05-15 02:01:52.519028] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:33:28.696 [2024-05-15 02:01:52.519103] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:28.696 EAL: No free 2048 kB hugepages reported on node 1 00:33:28.696 [2024-05-15 02:01:52.591494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:28.954 [2024-05-15 02:01:52.676765] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:28.954 [2024-05-15 02:01:52.676834] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:28.954 [2024-05-15 02:01:52.676858] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:28.954 [2024-05-15 02:01:52.676869] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:28.954 [2024-05-15 02:01:52.676878] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:28.954 [2024-05-15 02:01:52.676961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:33:28.954 [2024-05-15 02:01:52.677060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:33:28.954 [2024-05-15 02:01:52.677154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:33:28.954 [2024-05-15 02:01:52.677163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:33:28.954 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:33:28.954 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@861 -- # return 0 00:33:28.954 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:28.954 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@727 -- # xtrace_disable 00:33:28.954 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:28.954 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:28.954 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:28.954 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:28.954 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:28.954 Malloc0 00:33:28.954 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:28.954 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:28.954 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 
-- common/autotest_common.sh@560 -- # xtrace_disable 00:33:28.954 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:28.954 [2024-05-15 02:01:52.856222] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:28.954 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:28.954 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:28.954 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:28.954 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:28.954 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:28.954 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:28.954 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:28.955 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:28.955 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:28.955 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:28.955 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:28.955 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:28.955 [2024-05-15 02:01:52.884223] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:33:28.955 [2024-05-15 02:01:52.884545] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:29.212 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:29.212 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:29.212 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:29.212 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:29.212 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:29.212 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=21841 00:33:29.212 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:33:29.212 02:01:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:33:29.212 EAL: No free 2048 kB hugepages reported on node 1
00:33:31.120 02:01:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 21776
00:33:31.120 02:01:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:33:31.120 Read completed with error (sct=0, sc=8)
00:33:31.120 starting I/O failed
[... the same Read/Write "completed with error (sct=0, sc=8)" / "starting I/O failed" pair repeats for each outstanding I/O on this qpair ...]
00:33:31.121 [2024-05-15 02:01:54.910980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:31.121 Read completed with error (sct=0, sc=8)
00:33:31.121 starting I/O failed
[... repeated Read/Write completion-error pairs omitted ...]
00:33:31.121 [2024-05-15 02:01:54.911339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:31.121 Read completed with error (sct=0, sc=8)
00:33:31.121 starting I/O failed
[... repeated Read/Write completion-error pairs omitted ...]
00:33:31.121 [2024-05-15 02:01:54.911675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:31.121 Read completed with error (sct=0, sc=8)
00:33:31.121 starting I/O failed
[... repeated Read/Write completion-error pairs omitted ...]
00:33:31.122 [2024-05-15 02:01:54.911964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:31.122 [2024-05-15 02:01:54.912122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.122 [2024-05-15 02:01:54.912277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.122 [2024-05-15 02:01:54.912306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420
00:33:31.122 qpair failed and we were unable to recover it.
[... this four-line reconnect attempt (two posix.c:1037 "connect() failed, errno = 111" lines, one nvme_tcp.c:2374 "sock connection error" line, then "qpair failed and we were unable to recover it.") repeats with advancing timestamps, 02:01:54.912425 through 02:01:54.922773 ...]
[... further identical reconnect attempts omitted, 02:01:54.922922 through 02:01:54.923595 ...]
00:33:31.123 [2024-05-15 02:01:54.923743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.123 [2024-05-15 02:01:54.923887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.123 [2024-05-15 02:01:54.923915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420
00:33:31.123 qpair failed and we were unable to recover it.
[... the same four-line attempt against tqpair=0x1d7c570 repeats with advancing timestamps, 02:01:54.924126 through 02:01:54.949137 ...]
00:33:31.126 [2024-05-15 02:01:54.949318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.126 [2024-05-15 02:01:54.949425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.126 [2024-05-15 02:01:54.949451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420
00:33:31.126 qpair failed and we were unable to recover it.
00:33:31.126 [2024-05-15 02:01:54.949574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.126 [2024-05-15 02:01:54.949674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.126 [2024-05-15 02:01:54.949701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.126 qpair failed and we were unable to recover it. 00:33:31.126 [2024-05-15 02:01:54.949829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.126 [2024-05-15 02:01:54.949924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.126 [2024-05-15 02:01:54.949950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.126 qpair failed and we were unable to recover it. 00:33:31.126 [2024-05-15 02:01:54.950065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.126 [2024-05-15 02:01:54.950154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.126 [2024-05-15 02:01:54.950180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.126 qpair failed and we were unable to recover it. 00:33:31.126 [2024-05-15 02:01:54.950290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.126 [2024-05-15 02:01:54.950417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.126 [2024-05-15 02:01:54.950444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.126 qpair failed and we were unable to recover it. 00:33:31.126 [2024-05-15 02:01:54.950612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.126 [2024-05-15 02:01:54.950716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.126 [2024-05-15 02:01:54.950751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.126 qpair failed and we were unable to recover it. 00:33:31.126 [2024-05-15 02:01:54.950870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.126 [2024-05-15 02:01:54.951004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.126 [2024-05-15 02:01:54.951032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.126 qpair failed and we were unable to recover it. 00:33:31.126 [2024-05-15 02:01:54.951199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.126 [2024-05-15 02:01:54.951298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.126 [2024-05-15 02:01:54.951325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.126 qpair failed and we were unable to recover it. 
00:33:31.126 [2024-05-15 02:01:54.951458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.126 [2024-05-15 02:01:54.951583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.126 [2024-05-15 02:01:54.951608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.126 qpair failed and we were unable to recover it. 00:33:31.126 [2024-05-15 02:01:54.951736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.126 [2024-05-15 02:01:54.951885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.126 [2024-05-15 02:01:54.951911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.127 qpair failed and we were unable to recover it. 00:33:31.127 [2024-05-15 02:01:54.952056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.952160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.952186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.127 qpair failed and we were unable to recover it. 00:33:31.127 [2024-05-15 02:01:54.952287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.952413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.952439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.127 qpair failed and we were unable to recover it. 00:33:31.127 [2024-05-15 02:01:54.952564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.952717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.952744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.127 qpair failed and we were unable to recover it. 00:33:31.127 [2024-05-15 02:01:54.952870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.952965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.952991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.127 qpair failed and we were unable to recover it. 00:33:31.127 [2024-05-15 02:01:54.953099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.953189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.953213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.127 qpair failed and we were unable to recover it. 
00:33:31.127 [2024-05-15 02:01:54.953379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.953515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.953541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.127 qpair failed and we were unable to recover it. 00:33:31.127 [2024-05-15 02:01:54.953683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.953833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.953859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.127 qpair failed and we were unable to recover it. 00:33:31.127 [2024-05-15 02:01:54.953988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.954131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.954158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.127 qpair failed and we were unable to recover it. 00:33:31.127 [2024-05-15 02:01:54.954249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.954370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.954396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.127 qpair failed and we were unable to recover it. 00:33:31.127 [2024-05-15 02:01:54.954502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.954652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.954679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.127 qpair failed and we were unable to recover it. 00:33:31.127 [2024-05-15 02:01:54.954835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.954968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.954997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.127 qpair failed and we were unable to recover it. 00:33:31.127 [2024-05-15 02:01:54.955109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.955268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.955315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.127 qpair failed and we were unable to recover it. 
00:33:31.127 [2024-05-15 02:01:54.955443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.955574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.955600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.127 qpair failed and we were unable to recover it. 00:33:31.127 [2024-05-15 02:01:54.955734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.955890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.955917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.127 qpair failed and we were unable to recover it. 00:33:31.127 [2024-05-15 02:01:54.956072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.956223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.956250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.127 qpair failed and we were unable to recover it. 00:33:31.127 [2024-05-15 02:01:54.956379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.956514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.956541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.127 qpair failed and we were unable to recover it. 00:33:31.127 [2024-05-15 02:01:54.956650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.956782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.956809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.127 qpair failed and we were unable to recover it. 00:33:31.127 [2024-05-15 02:01:54.956901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.956998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.957023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.127 qpair failed and we were unable to recover it. 00:33:31.127 [2024-05-15 02:01:54.957121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.957247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.957282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.127 qpair failed and we were unable to recover it. 
00:33:31.127 [2024-05-15 02:01:54.957434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.957543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.957569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.127 qpair failed and we were unable to recover it. 00:33:31.127 [2024-05-15 02:01:54.957708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.957857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.957884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.127 qpair failed and we were unable to recover it. 00:33:31.127 [2024-05-15 02:01:54.958012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.958162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.958189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.127 qpair failed and we were unable to recover it. 00:33:31.127 [2024-05-15 02:01:54.958316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.958440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.958466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.127 qpair failed and we were unable to recover it. 00:33:31.127 [2024-05-15 02:01:54.958632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.958767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.958793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.127 qpair failed and we were unable to recover it. 00:33:31.127 [2024-05-15 02:01:54.958878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.958973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.958998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.127 qpair failed and we were unable to recover it. 00:33:31.127 [2024-05-15 02:01:54.959094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.959195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.959225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.127 qpair failed and we were unable to recover it. 
00:33:31.127 [2024-05-15 02:01:54.959383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.959519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.959548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.127 qpair failed and we were unable to recover it. 00:33:31.127 [2024-05-15 02:01:54.959692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.959826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.959853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.127 qpair failed and we were unable to recover it. 00:33:31.127 [2024-05-15 02:01:54.960037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.960156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.127 [2024-05-15 02:01:54.960183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.127 qpair failed and we were unable to recover it. 00:33:31.127 [2024-05-15 02:01:54.960295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.960413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.960439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.128 qpair failed and we were unable to recover it. 00:33:31.128 [2024-05-15 02:01:54.960561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.960692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.960719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.128 qpair failed and we were unable to recover it. 00:33:31.128 [2024-05-15 02:01:54.960873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.960988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.961018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.128 qpair failed and we were unable to recover it. 00:33:31.128 [2024-05-15 02:01:54.961165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.961317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.961344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.128 qpair failed and we were unable to recover it. 
00:33:31.128 [2024-05-15 02:01:54.961469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.961625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.961652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.128 qpair failed and we were unable to recover it. 00:33:31.128 [2024-05-15 02:01:54.961797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.961888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.961915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.128 qpair failed and we were unable to recover it. 00:33:31.128 [2024-05-15 02:01:54.962071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.962195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.962225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.128 qpair failed and we were unable to recover it. 00:33:31.128 [2024-05-15 02:01:54.962360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.962486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.962516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.128 qpair failed and we were unable to recover it. 00:33:31.128 [2024-05-15 02:01:54.962608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.962750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.962776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.128 qpair failed and we were unable to recover it. 00:33:31.128 [2024-05-15 02:01:54.962893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.963007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.963034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.128 qpair failed and we were unable to recover it. 00:33:31.128 [2024-05-15 02:01:54.963159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.963298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.963324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.128 qpair failed and we were unable to recover it. 
00:33:31.128 [2024-05-15 02:01:54.963425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.963560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.963586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.128 qpair failed and we were unable to recover it. 00:33:31.128 [2024-05-15 02:01:54.963730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.963848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.963878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.128 qpair failed and we were unable to recover it. 00:33:31.128 [2024-05-15 02:01:54.963993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.964141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.964168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.128 qpair failed and we were unable to recover it. 00:33:31.128 [2024-05-15 02:01:54.964303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.964426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.964451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.128 qpair failed and we were unable to recover it. 00:33:31.128 [2024-05-15 02:01:54.964555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.964681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.964708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.128 qpair failed and we were unable to recover it. 00:33:31.128 [2024-05-15 02:01:54.964807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.964899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.964924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.128 qpair failed and we were unable to recover it. 00:33:31.128 [2024-05-15 02:01:54.965055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.965178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.965210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.128 qpair failed and we were unable to recover it. 
00:33:31.128 [2024-05-15 02:01:54.965343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.965446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.965471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.128 qpair failed and we were unable to recover it. 00:33:31.128 [2024-05-15 02:01:54.965610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.965707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.965735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.128 qpair failed and we were unable to recover it. 00:33:31.128 [2024-05-15 02:01:54.965868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.966006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.966036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.128 qpair failed and we were unable to recover it. 00:33:31.128 [2024-05-15 02:01:54.966174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.966338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.966365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.128 qpair failed and we were unable to recover it. 00:33:31.128 [2024-05-15 02:01:54.966511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.966658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.966684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.128 qpair failed and we were unable to recover it. 00:33:31.128 [2024-05-15 02:01:54.966831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.966952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.966981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.128 qpair failed and we were unable to recover it. 00:33:31.128 [2024-05-15 02:01:54.967118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.967256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.967284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.128 qpair failed and we were unable to recover it. 
00:33:31.128 [2024-05-15 02:01:54.967402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.967522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.967549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.128 qpair failed and we were unable to recover it. 00:33:31.128 [2024-05-15 02:01:54.967679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.967799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.967825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.128 qpair failed and we were unable to recover it. 00:33:31.128 [2024-05-15 02:01:54.967941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.968038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.968081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.128 qpair failed and we were unable to recover it. 00:33:31.128 [2024-05-15 02:01:54.968225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.968353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.968379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.128 qpair failed and we were unable to recover it. 00:33:31.128 [2024-05-15 02:01:54.968517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.968659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.128 [2024-05-15 02:01:54.968688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.128 qpair failed and we were unable to recover it. 00:33:31.129 [2024-05-15 02:01:54.968801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.968932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.968961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.129 qpair failed and we were unable to recover it. 00:33:31.129 [2024-05-15 02:01:54.969084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.969234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.969261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.129 qpair failed and we were unable to recover it. 
00:33:31.129 [2024-05-15 02:01:54.969412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.969523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.969552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.129 qpair failed and we were unable to recover it. 00:33:31.129 [2024-05-15 02:01:54.969690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.969833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.969859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.129 qpair failed and we were unable to recover it. 00:33:31.129 [2024-05-15 02:01:54.969948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.970094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.970120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.129 qpair failed and we were unable to recover it. 00:33:31.129 [2024-05-15 02:01:54.970209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.970320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.970346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.129 qpair failed and we were unable to recover it. 00:33:31.129 [2024-05-15 02:01:54.970453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.970607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.970633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.129 qpair failed and we were unable to recover it. 00:33:31.129 [2024-05-15 02:01:54.970759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.970885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.970911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.129 qpair failed and we were unable to recover it. 00:33:31.129 [2024-05-15 02:01:54.971017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.971144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.971170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.129 qpair failed and we were unable to recover it. 
00:33:31.129 [2024-05-15 02:01:54.971275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.971374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.971400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.129 qpair failed and we were unable to recover it. 00:33:31.129 [2024-05-15 02:01:54.971505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.971655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.971681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.129 qpair failed and we were unable to recover it. 00:33:31.129 [2024-05-15 02:01:54.971812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.971933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.971960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.129 qpair failed and we were unable to recover it. 00:33:31.129 [2024-05-15 02:01:54.972077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.972178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.972206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.129 qpair failed and we were unable to recover it. 00:33:31.129 [2024-05-15 02:01:54.972325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.972420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.972447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.129 qpair failed and we were unable to recover it. 00:33:31.129 [2024-05-15 02:01:54.972540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.972670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.972697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.129 qpair failed and we were unable to recover it. 00:33:31.129 [2024-05-15 02:01:54.972873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.972985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.973014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.129 qpair failed and we were unable to recover it. 
00:33:31.129 [2024-05-15 02:01:54.973181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.973344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.973371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.129 qpair failed and we were unable to recover it. 00:33:31.129 [2024-05-15 02:01:54.973543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.973697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.973723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.129 qpair failed and we were unable to recover it. 00:33:31.129 [2024-05-15 02:01:54.973844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.973960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.973990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.129 qpair failed and we were unable to recover it. 00:33:31.129 [2024-05-15 02:01:54.974094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.974241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.974274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.129 qpair failed and we were unable to recover it. 00:33:31.129 [2024-05-15 02:01:54.974424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.974559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.974589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.129 qpair failed and we were unable to recover it. 00:33:31.129 [2024-05-15 02:01:54.974732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.974857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.974883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.129 qpair failed and we were unable to recover it. 00:33:31.129 [2024-05-15 02:01:54.975007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.975128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.975156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.129 qpair failed and we were unable to recover it. 
00:33:31.129 [2024-05-15 02:01:54.975294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.975416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.975444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.129 qpair failed and we were unable to recover it. 00:33:31.129 [2024-05-15 02:01:54.975563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.975725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.975755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.129 qpair failed and we were unable to recover it. 00:33:31.129 [2024-05-15 02:01:54.975898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.976046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.976073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.129 qpair failed and we were unable to recover it. 00:33:31.129 [2024-05-15 02:01:54.976268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.976414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.976440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.129 qpair failed and we were unable to recover it. 00:33:31.129 [2024-05-15 02:01:54.976597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.976732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.129 [2024-05-15 02:01:54.976761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.130 qpair failed and we were unable to recover it. 00:33:31.130 [2024-05-15 02:01:54.976903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.130 [2024-05-15 02:01:54.977033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.130 [2024-05-15 02:01:54.977060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.130 qpair failed and we were unable to recover it. 00:33:31.130 [2024-05-15 02:01:54.977209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.130 [2024-05-15 02:01:54.977368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.130 [2024-05-15 02:01:54.977393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.130 qpair failed and we were unable to recover it. 
00:33:31.130 [2024-05-15 02:01:54.977577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:33:31.130 [2024-05-15 02:01:54.977722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:33:31.130 [2024-05-15 02:01:54.977748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 
00:33:31.130 qpair failed and we were unable to recover it. 
00:33:31.130 [... the same four-message group repeats back-to-back with identical content, timestamps 02:01:54.977 through 02:01:55.022: two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x1d7c570 at 10.0.0.2:4420, and "qpair failed and we were unable to recover it." ...] 
00:33:31.135 [2024-05-15 02:01:55.022573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:33:31.135 [2024-05-15 02:01:55.022693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:33:31.135 [2024-05-15 02:01:55.022719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 
00:33:31.135 qpair failed and we were unable to recover it. 
00:33:31.136 [2024-05-15 02:01:55.022816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.022915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.022941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.136 qpair failed and we were unable to recover it. 00:33:31.136 [2024-05-15 02:01:55.023084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.023247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.023274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.136 qpair failed and we were unable to recover it. 00:33:31.136 [2024-05-15 02:01:55.023358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.023509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.023536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.136 qpair failed and we were unable to recover it. 00:33:31.136 [2024-05-15 02:01:55.023660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.023783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.023809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.136 qpair failed and we were unable to recover it. 00:33:31.136 [2024-05-15 02:01:55.023910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.024009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.024034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.136 qpair failed and we were unable to recover it. 00:33:31.136 [2024-05-15 02:01:55.024135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.024277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.024305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.136 qpair failed and we were unable to recover it. 00:33:31.136 [2024-05-15 02:01:55.024451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.024546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.024573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.136 qpair failed and we were unable to recover it. 
00:33:31.136 [2024-05-15 02:01:55.024689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.024812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.024839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.136 qpair failed and we were unable to recover it. 00:33:31.136 [2024-05-15 02:01:55.024953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.025107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.025136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.136 qpair failed and we were unable to recover it. 00:33:31.136 [2024-05-15 02:01:55.025295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.025393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.025420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.136 qpair failed and we were unable to recover it. 00:33:31.136 [2024-05-15 02:01:55.025508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.025661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.025687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.136 qpair failed and we were unable to recover it. 00:33:31.136 [2024-05-15 02:01:55.025806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.025926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.025952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.136 qpair failed and we were unable to recover it. 00:33:31.136 [2024-05-15 02:01:55.026111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.026277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.026306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.136 qpair failed and we were unable to recover it. 00:33:31.136 [2024-05-15 02:01:55.026438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.026574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.026602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.136 qpair failed and we were unable to recover it. 
00:33:31.136 [2024-05-15 02:01:55.026730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.026898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.026923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.136 qpair failed and we were unable to recover it. 00:33:31.136 [2024-05-15 02:01:55.027071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.027194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.027240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.136 qpair failed and we were unable to recover it. 00:33:31.136 [2024-05-15 02:01:55.027395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.027582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.027607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.136 qpair failed and we were unable to recover it. 00:33:31.136 [2024-05-15 02:01:55.027699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.027806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.027840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.136 qpair failed and we were unable to recover it. 00:33:31.136 [2024-05-15 02:01:55.027994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.028110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.028135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.136 qpair failed and we were unable to recover it. 00:33:31.136 [2024-05-15 02:01:55.028308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.028440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.028468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.136 qpair failed and we were unable to recover it. 00:33:31.136 [2024-05-15 02:01:55.028581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.028701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.028729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.136 qpair failed and we were unable to recover it. 
00:33:31.136 [2024-05-15 02:01:55.028839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.028955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.028980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.136 qpair failed and we were unable to recover it. 00:33:31.136 [2024-05-15 02:01:55.029066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.029165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.029192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.136 qpair failed and we were unable to recover it. 00:33:31.136 [2024-05-15 02:01:55.029342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.029477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.029505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.136 qpair failed and we were unable to recover it. 00:33:31.136 [2024-05-15 02:01:55.029656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.029777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.029803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.136 qpair failed and we were unable to recover it. 00:33:31.136 [2024-05-15 02:01:55.029956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.030078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.030104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.136 qpair failed and we were unable to recover it. 00:33:31.136 [2024-05-15 02:01:55.030257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.030378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.030420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.136 qpair failed and we were unable to recover it. 00:33:31.136 [2024-05-15 02:01:55.030541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.030640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.030667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.136 qpair failed and we were unable to recover it. 
00:33:31.136 [2024-05-15 02:01:55.030768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.030861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.030889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.136 qpair failed and we were unable to recover it. 00:33:31.136 [2024-05-15 02:01:55.031008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.136 [2024-05-15 02:01:55.031153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.031179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.137 qpair failed and we were unable to recover it. 00:33:31.137 [2024-05-15 02:01:55.031311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.031433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.031460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.137 qpair failed and we were unable to recover it. 00:33:31.137 [2024-05-15 02:01:55.031627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.031758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.031786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.137 qpair failed and we were unable to recover it. 00:33:31.137 [2024-05-15 02:01:55.031935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.032055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.032081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.137 qpair failed and we were unable to recover it. 00:33:31.137 [2024-05-15 02:01:55.032244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.032337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.032363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.137 qpair failed and we were unable to recover it. 00:33:31.137 [2024-05-15 02:01:55.032488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.032610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.032635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.137 qpair failed and we were unable to recover it. 
00:33:31.137 [2024-05-15 02:01:55.032810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.032931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.032957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.137 qpair failed and we were unable to recover it. 00:33:31.137 [2024-05-15 02:01:55.033107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.033197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.033230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.137 qpair failed and we were unable to recover it. 00:33:31.137 [2024-05-15 02:01:55.033378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.033511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.033540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.137 qpair failed and we were unable to recover it. 00:33:31.137 [2024-05-15 02:01:55.033660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.033816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.033844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.137 qpair failed and we were unable to recover it. 00:33:31.137 [2024-05-15 02:01:55.034011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.034114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.034141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.137 qpair failed and we were unable to recover it. 00:33:31.137 [2024-05-15 02:01:55.034236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.034362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.034388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.137 qpair failed and we were unable to recover it. 00:33:31.137 [2024-05-15 02:01:55.034514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.034642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.034671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.137 qpair failed and we were unable to recover it. 
00:33:31.137 [2024-05-15 02:01:55.034786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.034905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.034931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.137 qpair failed and we were unable to recover it. 00:33:31.137 [2024-05-15 02:01:55.035073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.035206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.035245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.137 qpair failed and we were unable to recover it. 00:33:31.137 [2024-05-15 02:01:55.035371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.035505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.035532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.137 qpair failed and we were unable to recover it. 00:33:31.137 [2024-05-15 02:01:55.035665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.035814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.035840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.137 qpair failed and we were unable to recover it. 00:33:31.137 [2024-05-15 02:01:55.035997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.036166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.036191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.137 qpair failed and we were unable to recover it. 00:33:31.137 [2024-05-15 02:01:55.036317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.036439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.036465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.137 qpair failed and we were unable to recover it. 00:33:31.137 [2024-05-15 02:01:55.036575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.036703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.036729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.137 qpair failed and we were unable to recover it. 
00:33:31.137 [2024-05-15 02:01:55.036851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.036969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.036998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.137 qpair failed and we were unable to recover it. 00:33:31.137 [2024-05-15 02:01:55.037174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.037300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.037327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.137 qpair failed and we were unable to recover it. 00:33:31.137 [2024-05-15 02:01:55.037417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.037532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.037558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.137 qpair failed and we were unable to recover it. 00:33:31.137 [2024-05-15 02:01:55.037689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.037777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.037803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.137 qpair failed and we were unable to recover it. 00:33:31.137 [2024-05-15 02:01:55.037932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.038063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.038089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.137 qpair failed and we were unable to recover it. 00:33:31.137 [2024-05-15 02:01:55.038247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.038355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.038381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.137 qpair failed and we were unable to recover it. 00:33:31.137 [2024-05-15 02:01:55.038501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.038633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.038663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.137 qpair failed and we were unable to recover it. 
00:33:31.137 [2024-05-15 02:01:55.038775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.038891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.038921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.137 qpair failed and we were unable to recover it. 00:33:31.137 [2024-05-15 02:01:55.039048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.039165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.039191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.137 qpair failed and we were unable to recover it. 00:33:31.137 [2024-05-15 02:01:55.039300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.039407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.137 [2024-05-15 02:01:55.039434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.137 qpair failed and we were unable to recover it. 00:33:31.138 [2024-05-15 02:01:55.039573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.138 [2024-05-15 02:01:55.039679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.138 [2024-05-15 02:01:55.039710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.138 qpair failed and we were unable to recover it. 00:33:31.138 [2024-05-15 02:01:55.039879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.138 [2024-05-15 02:01:55.039971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.138 [2024-05-15 02:01:55.039998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.138 qpair failed and we were unable to recover it. 00:33:31.138 [2024-05-15 02:01:55.040146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.138 [2024-05-15 02:01:55.040286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.138 [2024-05-15 02:01:55.040314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.138 qpair failed and we were unable to recover it. 00:33:31.138 [2024-05-15 02:01:55.040466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.138 [2024-05-15 02:01:55.040624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.138 [2024-05-15 02:01:55.040653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.138 qpair failed and we were unable to recover it. 
00:33:31.138 [2024-05-15 02:01:55.040797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.138 [2024-05-15 02:01:55.040954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.138 [2024-05-15 02:01:55.040983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.138 qpair failed and we were unable to recover it. 00:33:31.138 [2024-05-15 02:01:55.041108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.138 [2024-05-15 02:01:55.041227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.138 [2024-05-15 02:01:55.041257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.138 qpair failed and we were unable to recover it. 00:33:31.138 [2024-05-15 02:01:55.041426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.138 [2024-05-15 02:01:55.041550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.138 [2024-05-15 02:01:55.041575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.138 qpair failed and we were unable to recover it. 00:33:31.138 [2024-05-15 02:01:55.041699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.138 [2024-05-15 02:01:55.041825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.138 [2024-05-15 02:01:55.041851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.138 qpair failed and we were unable to recover it. 00:33:31.138 [2024-05-15 02:01:55.041978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.138 [2024-05-15 02:01:55.042084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.138 [2024-05-15 02:01:55.042125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.138 qpair failed and we were unable to recover it. 00:33:31.138 [2024-05-15 02:01:55.042248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.138 [2024-05-15 02:01:55.042374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.138 [2024-05-15 02:01:55.042405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.138 qpair failed and we were unable to recover it. 00:33:31.138 [2024-05-15 02:01:55.042541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.138 [2024-05-15 02:01:55.042663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.138 [2024-05-15 02:01:55.042689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.415 qpair failed and we were unable to recover it. 
00:33:31.415 [2024-05-15 02:01:55.042794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.415 [2024-05-15 02:01:55.042897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.415 [2024-05-15 02:01:55.042923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.415 qpair failed and we were unable to recover it. 00:33:31.415 [2024-05-15 02:01:55.043011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.415 [2024-05-15 02:01:55.043130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.415 [2024-05-15 02:01:55.043155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.415 qpair failed and we were unable to recover it. 00:33:31.415 [2024-05-15 02:01:55.043250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.415 [2024-05-15 02:01:55.043377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.415 [2024-05-15 02:01:55.043403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.415 qpair failed and we were unable to recover it. 00:33:31.415 [2024-05-15 02:01:55.043532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.415 [2024-05-15 02:01:55.043634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.415 [2024-05-15 02:01:55.043660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.415 qpair failed and we were unable to recover it. 00:33:31.415 [2024-05-15 02:01:55.043828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.415 [2024-05-15 02:01:55.043934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.415 [2024-05-15 02:01:55.043963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.415 qpair failed and we were unable to recover it. 00:33:31.415 [2024-05-15 02:01:55.044154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.415 [2024-05-15 02:01:55.044309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.415 [2024-05-15 02:01:55.044337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.415 qpair failed and we were unable to recover it. 00:33:31.415 [2024-05-15 02:01:55.044470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.415 [2024-05-15 02:01:55.044606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.415 [2024-05-15 02:01:55.044632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.415 qpair failed and we were unable to recover it. 
00:33:31.415 [2024-05-15 02:01:55.044792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.415 [2024-05-15 02:01:55.044961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.415 [2024-05-15 02:01:55.044987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.415 qpair failed and we were unable to recover it. 00:33:31.415 [2024-05-15 02:01:55.045110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.415 [2024-05-15 02:01:55.045229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.415 [2024-05-15 02:01:55.045259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.415 qpair failed and we were unable to recover it. 00:33:31.415 [2024-05-15 02:01:55.045353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.415 [2024-05-15 02:01:55.045446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.415 [2024-05-15 02:01:55.045471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.415 qpair failed and we were unable to recover it. 00:33:31.415 [2024-05-15 02:01:55.045595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.415 [2024-05-15 02:01:55.045683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.415 [2024-05-15 02:01:55.045724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.415 qpair failed and we were unable to recover it. 00:33:31.415 [2024-05-15 02:01:55.045856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.415 [2024-05-15 02:01:55.045979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.415 [2024-05-15 02:01:55.046005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.415 qpair failed and we were unable to recover it. 00:33:31.415 [2024-05-15 02:01:55.046119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.415 [2024-05-15 02:01:55.046303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.415 [2024-05-15 02:01:55.046332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.415 qpair failed and we were unable to recover it. 00:33:31.415 [2024-05-15 02:01:55.046459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.415 [2024-05-15 02:01:55.046584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.415 [2024-05-15 02:01:55.046609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.415 qpair failed and we were unable to recover it. 
00:33:31.415 [2024-05-15 02:01:55.046702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.415 [2024-05-15 02:01:55.046847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.415 [2024-05-15 02:01:55.046872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.415 qpair failed and we were unable to recover it. 00:33:31.415 [2024-05-15 02:01:55.047003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.415 [2024-05-15 02:01:55.047135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.415 [2024-05-15 02:01:55.047163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.415 qpair failed and we were unable to recover it. 00:33:31.415 [2024-05-15 02:01:55.047293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.415 [2024-05-15 02:01:55.047413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.415 [2024-05-15 02:01:55.047440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.415 qpair failed and we were unable to recover it. 00:33:31.415 [2024-05-15 02:01:55.047566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.415 [2024-05-15 02:01:55.047663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.415 [2024-05-15 02:01:55.047690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.415 qpair failed and we were unable to recover it. 00:33:31.415 [2024-05-15 02:01:55.047782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.415 [2024-05-15 02:01:55.047873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.415 [2024-05-15 02:01:55.047898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.416 qpair failed and we were unable to recover it. 00:33:31.416 [2024-05-15 02:01:55.048050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.416 [2024-05-15 02:01:55.048135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.416 [2024-05-15 02:01:55.048161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.416 qpair failed and we were unable to recover it. 00:33:31.416 [2024-05-15 02:01:55.048257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.416 [2024-05-15 02:01:55.048357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.416 [2024-05-15 02:01:55.048384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.416 qpair failed and we were unable to recover it. 
00:33:31.416 [2024-05-15 02:01:55.048536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.416 [2024-05-15 02:01:55.048674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.416 [2024-05-15 02:01:55.048700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.416 qpair failed and we were unable to recover it. 00:33:31.416 [2024-05-15 02:01:55.048845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.416 [2024-05-15 02:01:55.048988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.416 [2024-05-15 02:01:55.049017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.416 qpair failed and we were unable to recover it. 00:33:31.416 [2024-05-15 02:01:55.049184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.416 [2024-05-15 02:01:55.049340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.416 [2024-05-15 02:01:55.049367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.416 qpair failed and we were unable to recover it. 00:33:31.416 [2024-05-15 02:01:55.049498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.416 [2024-05-15 02:01:55.049620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.416 [2024-05-15 02:01:55.049646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.416 qpair failed and we were unable to recover it. 00:33:31.416 [2024-05-15 02:01:55.049788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.416 [2024-05-15 02:01:55.049895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.416 [2024-05-15 02:01:55.049925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.416 qpair failed and we were unable to recover it. 00:33:31.416 [2024-05-15 02:01:55.050065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.416 [2024-05-15 02:01:55.050225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.416 [2024-05-15 02:01:55.050269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.416 qpair failed and we were unable to recover it. 00:33:31.416 [2024-05-15 02:01:55.050407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.416 [2024-05-15 02:01:55.050567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.416 [2024-05-15 02:01:55.050596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.416 qpair failed and we were unable to recover it. 
00:33:31.416 [2024-05-15 02:01:55.050726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.416 [2024-05-15 02:01:55.050867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.416 [2024-05-15 02:01:55.050893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420
00:33:31.416 qpair failed and we were unable to recover it.
00:33:31.416 [2024-05-15 02:01:55.051019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.416 [2024-05-15 02:01:55.051169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.416 [2024-05-15 02:01:55.051196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420
00:33:31.416 qpair failed and we were unable to recover it.
[output truncated: the same sequence (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x1d7c570 at addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats without variation from 2024-05-15 02:01:55.051368 through 02:01:55.095747]
00:33:31.421 [2024-05-15 02:01:55.095877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.421 [2024-05-15 02:01:55.096030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.421 [2024-05-15 02:01:55.096057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.421 qpair failed and we were unable to recover it. 00:33:31.421 [2024-05-15 02:01:55.096178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.421 [2024-05-15 02:01:55.096334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.421 [2024-05-15 02:01:55.096361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.421 qpair failed and we were unable to recover it. 00:33:31.421 [2024-05-15 02:01:55.096481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.421 [2024-05-15 02:01:55.096574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.421 [2024-05-15 02:01:55.096599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.421 qpair failed and we were unable to recover it. 00:33:31.421 [2024-05-15 02:01:55.096722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.421 [2024-05-15 02:01:55.096865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.421 [2024-05-15 02:01:55.096891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.421 qpair failed and we were unable to recover it. 00:33:31.421 [2024-05-15 02:01:55.097034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.421 [2024-05-15 02:01:55.097133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.421 [2024-05-15 02:01:55.097162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.421 qpair failed and we were unable to recover it. 00:33:31.421 [2024-05-15 02:01:55.097282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.421 [2024-05-15 02:01:55.097443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.421 [2024-05-15 02:01:55.097470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.421 qpair failed and we were unable to recover it. 00:33:31.421 [2024-05-15 02:01:55.097617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.421 [2024-05-15 02:01:55.097725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.421 [2024-05-15 02:01:55.097751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.421 qpair failed and we were unable to recover it. 
00:33:31.421 [2024-05-15 02:01:55.097897] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a0f0 is same with the state(5) to be set
00:33:31.421 [2024-05-15 02:01:55.098082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.422 [2024-05-15 02:01:55.098261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.422 [2024-05-15 02:01:55.098294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420
00:33:31.422 qpair failed and we were unable to recover it.
[log condensed: the same failure sequence repeats, differing only in timestamps, from 02:01:55.098415 through 02:01:55.104340 against tqpair=0x7f2114000b90; at 02:01:55.104443 the connection attempts revert to tqpair=0x1d7c570 and fail identically]
[log condensed: the "connect() failed, errno = 111" / "sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420" sequence, each occurrence ending in "qpair failed and we were unable to recover it.", continues to repeat from 02:01:55.104714 through 02:01:55.133112]
00:33:31.426 [2024-05-15 02:01:55.133264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.133353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.133380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.426 qpair failed and we were unable to recover it. 00:33:31.426 [2024-05-15 02:01:55.133505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.133668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.133697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.426 qpair failed and we were unable to recover it. 00:33:31.426 [2024-05-15 02:01:55.133847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.133993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.134019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.426 qpair failed and we were unable to recover it. 00:33:31.426 [2024-05-15 02:01:55.134166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.134275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.134305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.426 qpair failed and we were unable to recover it. 00:33:31.426 [2024-05-15 02:01:55.134480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.134595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.134621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.426 qpair failed and we were unable to recover it. 00:33:31.426 [2024-05-15 02:01:55.134742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.134884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.134914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.426 qpair failed and we were unable to recover it. 00:33:31.426 [2024-05-15 02:01:55.135107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.135253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.135281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.426 qpair failed and we were unable to recover it. 
00:33:31.426 [2024-05-15 02:01:55.135402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.135546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.135575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.426 qpair failed and we were unable to recover it. 00:33:31.426 [2024-05-15 02:01:55.135749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.135847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.135873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.426 qpair failed and we were unable to recover it. 00:33:31.426 [2024-05-15 02:01:55.136042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.136184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.136210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.426 qpair failed and we were unable to recover it. 00:33:31.426 [2024-05-15 02:01:55.136368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.136533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.136561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.426 qpair failed and we were unable to recover it. 00:33:31.426 [2024-05-15 02:01:55.136707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.136808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.136834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.426 qpair failed and we were unable to recover it. 00:33:31.426 [2024-05-15 02:01:55.136929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.137080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.137107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.426 qpair failed and we were unable to recover it. 00:33:31.426 [2024-05-15 02:01:55.137256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.137385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.137413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.426 qpair failed and we were unable to recover it. 
00:33:31.426 [2024-05-15 02:01:55.137557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.137680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.137706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.426 qpair failed and we were unable to recover it. 00:33:31.426 [2024-05-15 02:01:55.137864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.138020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.138049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.426 qpair failed and we were unable to recover it. 00:33:31.426 [2024-05-15 02:01:55.138187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.138342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.138369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.426 qpair failed and we were unable to recover it. 00:33:31.426 [2024-05-15 02:01:55.138465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.138556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.138582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.426 qpair failed and we were unable to recover it. 00:33:31.426 [2024-05-15 02:01:55.138700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.138840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.138869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.426 qpair failed and we were unable to recover it. 00:33:31.426 [2024-05-15 02:01:55.138977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.139123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.139149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.426 qpair failed and we were unable to recover it. 00:33:31.426 [2024-05-15 02:01:55.139259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.139408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.139435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.426 qpair failed and we were unable to recover it. 
00:33:31.426 [2024-05-15 02:01:55.139616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.139711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.139737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.426 qpair failed and we were unable to recover it. 00:33:31.426 [2024-05-15 02:01:55.139834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.139935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.139961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.426 qpair failed and we were unable to recover it. 00:33:31.426 [2024-05-15 02:01:55.140060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.140174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.140200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.426 qpair failed and we were unable to recover it. 00:33:31.426 [2024-05-15 02:01:55.140327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.426 [2024-05-15 02:01:55.140496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.140526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.427 qpair failed and we were unable to recover it. 00:33:31.427 [2024-05-15 02:01:55.140660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.140827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.140853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.427 qpair failed and we were unable to recover it. 00:33:31.427 [2024-05-15 02:01:55.140985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.141135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.141161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.427 qpair failed and we were unable to recover it. 00:33:31.427 [2024-05-15 02:01:55.141354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.141442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.141468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.427 qpair failed and we were unable to recover it. 
00:33:31.427 [2024-05-15 02:01:55.141583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.141689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.141718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.427 qpair failed and we were unable to recover it. 00:33:31.427 [2024-05-15 02:01:55.141866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.141980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.142006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.427 qpair failed and we were unable to recover it. 00:33:31.427 [2024-05-15 02:01:55.142137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.142244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.142274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.427 qpair failed and we were unable to recover it. 00:33:31.427 [2024-05-15 02:01:55.142456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.142573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.142600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.427 qpair failed and we were unable to recover it. 00:33:31.427 [2024-05-15 02:01:55.142718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.142862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.142889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.427 qpair failed and we were unable to recover it. 00:33:31.427 [2024-05-15 02:01:55.143039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.143148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.143177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.427 qpair failed and we were unable to recover it. 00:33:31.427 [2024-05-15 02:01:55.143312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.143443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.143472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.427 qpair failed and we were unable to recover it. 
00:33:31.427 [2024-05-15 02:01:55.143635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.143751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.143777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.427 qpair failed and we were unable to recover it. 00:33:31.427 [2024-05-15 02:01:55.143923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.144109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.144136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.427 qpair failed and we were unable to recover it. 00:33:31.427 [2024-05-15 02:01:55.144280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.144372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.144416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.427 qpair failed and we were unable to recover it. 00:33:31.427 [2024-05-15 02:01:55.144565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.144688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.144715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.427 qpair failed and we were unable to recover it. 00:33:31.427 [2024-05-15 02:01:55.144837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.144921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.144947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.427 qpair failed and we were unable to recover it. 00:33:31.427 [2024-05-15 02:01:55.145103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.145228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.145254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.427 qpair failed and we were unable to recover it. 00:33:31.427 [2024-05-15 02:01:55.145379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.145527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.145570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.427 qpair failed and we were unable to recover it. 
00:33:31.427 [2024-05-15 02:01:55.145675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.145811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.145841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.427 qpair failed and we were unable to recover it. 00:33:31.427 [2024-05-15 02:01:55.145973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.146084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.146113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.427 qpair failed and we were unable to recover it. 00:33:31.427 [2024-05-15 02:01:55.146239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.146352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.146378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.427 qpair failed and we were unable to recover it. 00:33:31.427 [2024-05-15 02:01:55.146518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.146660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.146686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.427 qpair failed and we were unable to recover it. 00:33:31.427 [2024-05-15 02:01:55.146837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.146983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.147026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.427 qpair failed and we were unable to recover it. 00:33:31.427 [2024-05-15 02:01:55.147178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.147321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.147365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.427 qpair failed and we were unable to recover it. 00:33:31.427 [2024-05-15 02:01:55.147514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.147629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.147655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.427 qpair failed and we were unable to recover it. 
00:33:31.427 [2024-05-15 02:01:55.147796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.147950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.147980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.427 qpair failed and we were unable to recover it. 00:33:31.427 [2024-05-15 02:01:55.148109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.148244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.148287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.427 qpair failed and we were unable to recover it. 00:33:31.427 [2024-05-15 02:01:55.148408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.148560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.148586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.427 qpair failed and we were unable to recover it. 00:33:31.427 [2024-05-15 02:01:55.148734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.148897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.148926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.427 qpair failed and we were unable to recover it. 00:33:31.427 [2024-05-15 02:01:55.149062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.149187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.427 [2024-05-15 02:01:55.149214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.427 qpair failed and we were unable to recover it. 00:33:31.427 [2024-05-15 02:01:55.149347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.149481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.149510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.428 qpair failed and we were unable to recover it. 00:33:31.428 [2024-05-15 02:01:55.149654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.149785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.149826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.428 qpair failed and we were unable to recover it. 
00:33:31.428 [2024-05-15 02:01:55.149949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.150073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.150103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.428 qpair failed and we were unable to recover it. 00:33:31.428 [2024-05-15 02:01:55.150251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.150371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.150400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.428 qpair failed and we were unable to recover it. 00:33:31.428 [2024-05-15 02:01:55.150536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.150681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.150707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.428 qpair failed and we were unable to recover it. 00:33:31.428 [2024-05-15 02:01:55.150859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.150980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.151006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.428 qpair failed and we were unable to recover it. 00:33:31.428 [2024-05-15 02:01:55.151122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.151283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.151313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.428 qpair failed and we were unable to recover it. 00:33:31.428 [2024-05-15 02:01:55.151462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.151586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.151612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.428 qpair failed and we were unable to recover it. 00:33:31.428 [2024-05-15 02:01:55.151730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.151849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.151876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.428 qpair failed and we were unable to recover it. 
00:33:31.428 [2024-05-15 02:01:55.152024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.152152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.152181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.428 qpair failed and we were unable to recover it. 00:33:31.428 [2024-05-15 02:01:55.152317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.152418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.152446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.428 qpair failed and we were unable to recover it. 00:33:31.428 [2024-05-15 02:01:55.152573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.152694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.152721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.428 qpair failed and we were unable to recover it. 00:33:31.428 [2024-05-15 02:01:55.152893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.152997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.153026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.428 qpair failed and we were unable to recover it. 00:33:31.428 [2024-05-15 02:01:55.153165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.153307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.153334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.428 qpair failed and we were unable to recover it. 00:33:31.428 [2024-05-15 02:01:55.153491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.153653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.153682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.428 qpair failed and we were unable to recover it. 00:33:31.428 [2024-05-15 02:01:55.153814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.153974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.154003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.428 qpair failed and we were unable to recover it. 
00:33:31.428 [2024-05-15 02:01:55.154108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.154243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.154287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.428 qpair failed and we were unable to recover it. 00:33:31.428 [2024-05-15 02:01:55.154411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.154560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.154587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.428 qpair failed and we were unable to recover it. 00:33:31.428 [2024-05-15 02:01:55.154708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.154867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.154897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.428 qpair failed and we were unable to recover it. 00:33:31.428 [2024-05-15 02:01:55.154999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.155131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.155160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.428 qpair failed and we were unable to recover it. 00:33:31.428 [2024-05-15 02:01:55.155307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.155454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.155481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.428 qpair failed and we were unable to recover it. 00:33:31.428 [2024-05-15 02:01:55.155654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.155810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.155839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.428 qpair failed and we were unable to recover it. 00:33:31.428 [2024-05-15 02:01:55.155970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.156099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.156127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.428 qpair failed and we were unable to recover it. 
00:33:31.428 [2024-05-15 02:01:55.156272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.156397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.156423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.428 qpair failed and we were unable to recover it. 00:33:31.428 [2024-05-15 02:01:55.156559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.156717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.156746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.428 qpair failed and we were unable to recover it. 00:33:31.428 [2024-05-15 02:01:55.156918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.157044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.428 [2024-05-15 02:01:55.157070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.428 qpair failed and we were unable to recover it. 00:33:31.429 [2024-05-15 02:01:55.157191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.429 [2024-05-15 02:01:55.157296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.429 [2024-05-15 02:01:55.157323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.429 qpair failed and we were unable to recover it. 00:33:31.429 [2024-05-15 02:01:55.157445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.429 [2024-05-15 02:01:55.157588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.429 [2024-05-15 02:01:55.157615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.429 qpair failed and we were unable to recover it. 00:33:31.429 [2024-05-15 02:01:55.157747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.429 [2024-05-15 02:01:55.157898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.429 [2024-05-15 02:01:55.157925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.429 qpair failed and we were unable to recover it. 00:33:31.429 [2024-05-15 02:01:55.158077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.429 [2024-05-15 02:01:55.158228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.429 [2024-05-15 02:01:55.158255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.429 qpair failed and we were unable to recover it. 
00:33:31.429 [2024-05-15 02:01:55.158407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.429 [2024-05-15 02:01:55.158577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.429 [2024-05-15 02:01:55.158606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.429 qpair failed and we were unable to recover it. 00:33:31.429 [2024-05-15 02:01:55.158715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.429 [2024-05-15 02:01:55.158819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.429 [2024-05-15 02:01:55.158848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.429 qpair failed and we were unable to recover it. 00:33:31.429 [2024-05-15 02:01:55.159013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.429 [2024-05-15 02:01:55.159158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.429 [2024-05-15 02:01:55.159185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.429 qpair failed and we were unable to recover it. 00:33:31.429 [2024-05-15 02:01:55.159329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.429 [2024-05-15 02:01:55.159471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.429 [2024-05-15 02:01:55.159501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.429 qpair failed and we were unable to recover it. 00:33:31.429 [2024-05-15 02:01:55.159635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.429 [2024-05-15 02:01:55.159774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.429 [2024-05-15 02:01:55.159804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.429 qpair failed and we were unable to recover it. 00:33:31.429 [2024-05-15 02:01:55.159969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.429 [2024-05-15 02:01:55.160082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.429 [2024-05-15 02:01:55.160109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.429 qpair failed and we were unable to recover it. 00:33:31.429 [2024-05-15 02:01:55.160266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.429 [2024-05-15 02:01:55.160438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.429 [2024-05-15 02:01:55.160464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.429 qpair failed and we were unable to recover it. 
00:33:31.429 [2024-05-15 02:01:55.160610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.429 [2024-05-15 02:01:55.160708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.429 [2024-05-15 02:01:55.160734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.429 qpair failed and we were unable to recover it. 00:33:31.429 [2024-05-15 02:01:55.160852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.429 [2024-05-15 02:01:55.160999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.429 [2024-05-15 02:01:55.161026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.429 qpair failed and we were unable to recover it. 00:33:31.429 [2024-05-15 02:01:55.161188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.429 [2024-05-15 02:01:55.161306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.429 [2024-05-15 02:01:55.161335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.429 qpair failed and we were unable to recover it. 00:33:31.429 [2024-05-15 02:01:55.161475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.429 [2024-05-15 02:01:55.161612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.429 [2024-05-15 02:01:55.161641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.429 qpair failed and we were unable to recover it. 00:33:31.429 [2024-05-15 02:01:55.161781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.429 [2024-05-15 02:01:55.161903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.429 [2024-05-15 02:01:55.161930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.429 qpair failed and we were unable to recover it. 00:33:31.429 [2024-05-15 02:01:55.162076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.429 [2024-05-15 02:01:55.162222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.429 [2024-05-15 02:01:55.162252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.429 qpair failed and we were unable to recover it. 00:33:31.429 [2024-05-15 02:01:55.162411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.429 [2024-05-15 02:01:55.162559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.429 [2024-05-15 02:01:55.162585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.429 qpair failed and we were unable to recover it. 
00:33:31.429 [2024-05-15 02:01:55.162735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.429 [2024-05-15 02:01:55.162840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.429 [2024-05-15 02:01:55.162866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420
00:33:31.429 qpair failed and we were unable to recover it.
00:33:31.429 (identical connect()/qpair-failure sequence for tqpair=0x1d7c570 repeated through [2024-05-15 02:01:55.200190])
00:33:31.434 [2024-05-15 02:01:55.200347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.434 [2024-05-15 02:01:55.200500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.434 [2024-05-15 02:01:55.200529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:31.434 qpair failed and we were unable to recover it.
00:33:31.434 (identical connect()/qpair-failure sequence for tqpair=0x7f211c000b90 repeated through [2024-05-15 02:01:55.205404])
00:33:31.434 [2024-05-15 02:01:55.205559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.434 [2024-05-15 02:01:55.205691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.434 [2024-05-15 02:01:55.205719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.434 qpair failed and we were unable to recover it. 00:33:31.434 [2024-05-15 02:01:55.205855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.434 [2024-05-15 02:01:55.205995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.206027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.435 qpair failed and we were unable to recover it. 00:33:31.435 [2024-05-15 02:01:55.206136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.206233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.206262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.435 qpair failed and we were unable to recover it. 00:33:31.435 [2024-05-15 02:01:55.206407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.206566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.206595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.435 qpair failed and we were unable to recover it. 00:33:31.435 [2024-05-15 02:01:55.206697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.206836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.206866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.435 qpair failed and we were unable to recover it. 00:33:31.435 [2024-05-15 02:01:55.207002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.207107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.207137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.435 qpair failed and we were unable to recover it. 00:33:31.435 [2024-05-15 02:01:55.207286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.207390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.207421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.435 qpair failed and we were unable to recover it. 
00:33:31.435 [2024-05-15 02:01:55.207530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.207664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.207693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.435 qpair failed and we were unable to recover it. 00:33:31.435 [2024-05-15 02:01:55.207808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.207945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.207975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.435 qpair failed and we were unable to recover it. 00:33:31.435 [2024-05-15 02:01:55.208086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.208227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.208257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.435 qpair failed and we were unable to recover it. 00:33:31.435 [2024-05-15 02:01:55.208399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.208533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.208564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.435 qpair failed and we were unable to recover it. 00:33:31.435 [2024-05-15 02:01:55.208695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.208797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.208826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.435 qpair failed and we were unable to recover it. 00:33:31.435 [2024-05-15 02:01:55.208938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.209094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.209121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.435 qpair failed and we were unable to recover it. 00:33:31.435 [2024-05-15 02:01:55.209213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.209351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.209378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.435 qpair failed and we were unable to recover it. 
00:33:31.435 [2024-05-15 02:01:55.209524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.209659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.209705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.435 qpair failed and we were unable to recover it. 00:33:31.435 [2024-05-15 02:01:55.209869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.210022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.210055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.435 qpair failed and we were unable to recover it. 00:33:31.435 [2024-05-15 02:01:55.210205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.210341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.210367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.435 qpair failed and we were unable to recover it. 00:33:31.435 [2024-05-15 02:01:55.210478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.210590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.210619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:31.435 qpair failed and we were unable to recover it. 00:33:31.435 [2024-05-15 02:01:55.210742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.210884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.210915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.435 qpair failed and we were unable to recover it. 00:33:31.435 [2024-05-15 02:01:55.211031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.211164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.211194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.435 qpair failed and we were unable to recover it. 00:33:31.435 [2024-05-15 02:01:55.211349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.211444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.211479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.435 qpair failed and we were unable to recover it. 
00:33:31.435 [2024-05-15 02:01:55.211638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.211746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.211777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.435 qpair failed and we were unable to recover it. 00:33:31.435 [2024-05-15 02:01:55.211900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.212049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.212077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.435 qpair failed and we were unable to recover it. 00:33:31.435 [2024-05-15 02:01:55.212180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.212317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.212344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.435 qpair failed and we were unable to recover it. 00:33:31.435 [2024-05-15 02:01:55.212450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.212565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.212596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.435 qpair failed and we were unable to recover it. 00:33:31.435 [2024-05-15 02:01:55.212749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.212903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.212933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.435 qpair failed and we were unable to recover it. 00:33:31.435 [2024-05-15 02:01:55.213043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.213152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.213183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.435 qpair failed and we were unable to recover it. 00:33:31.435 [2024-05-15 02:01:55.213304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.213453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.213485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.435 qpair failed and we were unable to recover it. 
00:33:31.435 [2024-05-15 02:01:55.213624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.213766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.213795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.435 qpair failed and we were unable to recover it. 00:33:31.435 [2024-05-15 02:01:55.213910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.214038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.214067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.435 qpair failed and we were unable to recover it. 00:33:31.435 [2024-05-15 02:01:55.214178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.214349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.435 [2024-05-15 02:01:55.214377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.435 qpair failed and we were unable to recover it. 00:33:31.436 [2024-05-15 02:01:55.214472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.214647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.214676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.436 qpair failed and we were unable to recover it. 00:33:31.436 [2024-05-15 02:01:55.214786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.214893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.214923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.436 qpair failed and we were unable to recover it. 00:33:31.436 [2024-05-15 02:01:55.215025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.215156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.215185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.436 qpair failed and we were unable to recover it. 00:33:31.436 [2024-05-15 02:01:55.215354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.215486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.215531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.436 qpair failed and we were unable to recover it. 
00:33:31.436 [2024-05-15 02:01:55.215631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.215732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.215761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.436 qpair failed and we were unable to recover it. 00:33:31.436 [2024-05-15 02:01:55.215917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.216045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.216074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.436 qpair failed and we were unable to recover it. 00:33:31.436 [2024-05-15 02:01:55.216196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.216316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.216344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.436 qpair failed and we were unable to recover it. 00:33:31.436 [2024-05-15 02:01:55.216466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.216580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.216607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.436 qpair failed and we were unable to recover it. 00:33:31.436 [2024-05-15 02:01:55.216777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.216918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.216947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.436 qpair failed and we were unable to recover it. 00:33:31.436 [2024-05-15 02:01:55.217082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.217225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.217270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.436 qpair failed and we were unable to recover it. 00:33:31.436 [2024-05-15 02:01:55.217368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.217469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.217505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.436 qpair failed and we were unable to recover it. 
00:33:31.436 [2024-05-15 02:01:55.217622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.217747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.217776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.436 qpair failed and we were unable to recover it. 00:33:31.436 [2024-05-15 02:01:55.217906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.218028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.218057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.436 qpair failed and we were unable to recover it. 00:33:31.436 [2024-05-15 02:01:55.218162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.218284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.218312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.436 qpair failed and we were unable to recover it. 00:33:31.436 [2024-05-15 02:01:55.218404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.218504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.218531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.436 qpair failed and we were unable to recover it. 00:33:31.436 [2024-05-15 02:01:55.218674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.218779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.218808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.436 qpair failed and we were unable to recover it. 00:33:31.436 [2024-05-15 02:01:55.218916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.219048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.219076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.436 qpair failed and we were unable to recover it. 00:33:31.436 [2024-05-15 02:01:55.219179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.219309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.219337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.436 qpair failed and we were unable to recover it. 
00:33:31.436 [2024-05-15 02:01:55.219451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.219560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.219589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.436 qpair failed and we were unable to recover it. 00:33:31.436 [2024-05-15 02:01:55.219705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.219822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.219853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.436 qpair failed and we were unable to recover it. 00:33:31.436 [2024-05-15 02:01:55.219954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.220100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.220127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.436 qpair failed and we were unable to recover it. 00:33:31.436 [2024-05-15 02:01:55.220306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.220404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.220433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.436 qpair failed and we were unable to recover it. 00:33:31.436 [2024-05-15 02:01:55.220569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.220698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.220727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.436 qpair failed and we were unable to recover it. 00:33:31.436 [2024-05-15 02:01:55.220845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.220974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.221004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.436 qpair failed and we were unable to recover it. 00:33:31.436 [2024-05-15 02:01:55.221158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.221260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.221288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.436 qpair failed and we were unable to recover it. 
00:33:31.436 [2024-05-15 02:01:55.221407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.221507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.221541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.436 qpair failed and we were unable to recover it. 00:33:31.436 [2024-05-15 02:01:55.221669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.221770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.221797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.436 qpair failed and we were unable to recover it. 00:33:31.436 [2024-05-15 02:01:55.221940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.222072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.222098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.436 qpair failed and we were unable to recover it. 00:33:31.436 [2024-05-15 02:01:55.222246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.222353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.436 [2024-05-15 02:01:55.222380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.436 qpair failed and we were unable to recover it. 00:33:31.436 [2024-05-15 02:01:55.222506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.222629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.222656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.437 qpair failed and we were unable to recover it. 00:33:31.437 [2024-05-15 02:01:55.222762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.222886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.222913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.437 qpair failed and we were unable to recover it. 00:33:31.437 [2024-05-15 02:01:55.223009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.223110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.223137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.437 qpair failed and we were unable to recover it. 
00:33:31.437 [2024-05-15 02:01:55.223266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.223368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.223395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.437 qpair failed and we were unable to recover it. 00:33:31.437 [2024-05-15 02:01:55.223525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.223638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.223665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.437 qpair failed and we were unable to recover it. 00:33:31.437 [2024-05-15 02:01:55.223762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.223911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.223938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.437 qpair failed and we were unable to recover it. 00:33:31.437 [2024-05-15 02:01:55.224031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.224127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.224154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.437 qpair failed and we were unable to recover it. 00:33:31.437 [2024-05-15 02:01:55.224263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.224418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.224444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.437 qpair failed and we were unable to recover it. 00:33:31.437 [2024-05-15 02:01:55.224554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.224680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.224711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.437 qpair failed and we were unable to recover it. 00:33:31.437 [2024-05-15 02:01:55.224849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.224941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.224967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.437 qpair failed and we were unable to recover it. 
00:33:31.437 [2024-05-15 02:01:55.225071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.225166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.225194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.437 qpair failed and we were unable to recover it. 00:33:31.437 [2024-05-15 02:01:55.225326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.225423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.225450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.437 qpair failed and we were unable to recover it. 00:33:31.437 [2024-05-15 02:01:55.225586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.225708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.225734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.437 qpair failed and we were unable to recover it. 00:33:31.437 [2024-05-15 02:01:55.225833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.225926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.225953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.437 qpair failed and we were unable to recover it. 00:33:31.437 [2024-05-15 02:01:55.226057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.226179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.226206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.437 qpair failed and we were unable to recover it. 00:33:31.437 [2024-05-15 02:01:55.226307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.226402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.226430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.437 qpair failed and we were unable to recover it. 00:33:31.437 [2024-05-15 02:01:55.226564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.226655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.226682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.437 qpair failed and we were unable to recover it. 
00:33:31.437 [2024-05-15 02:01:55.226775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.226867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.226893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.437 qpair failed and we were unable to recover it. 00:33:31.437 [2024-05-15 02:01:55.226986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.227080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.227111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.437 qpair failed and we were unable to recover it. 00:33:31.437 [2024-05-15 02:01:55.227225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.227348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.227375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.437 qpair failed and we were unable to recover it. 00:33:31.437 [2024-05-15 02:01:55.227471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.227571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.227597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.437 qpair failed and we were unable to recover it. 00:33:31.437 [2024-05-15 02:01:55.227698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.227797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.227823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.437 qpair failed and we were unable to recover it. 00:33:31.437 [2024-05-15 02:01:55.227928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.228023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.228049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.437 qpair failed and we were unable to recover it. 00:33:31.437 [2024-05-15 02:01:55.228170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.228293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.228322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.437 qpair failed and we were unable to recover it. 
00:33:31.437 [2024-05-15 02:01:55.228454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.228564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.228592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.437 qpair failed and we were unable to recover it. 00:33:31.437 [2024-05-15 02:01:55.228762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.437 [2024-05-15 02:01:55.228876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.438 [2024-05-15 02:01:55.228904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.438 qpair failed and we were unable to recover it. 00:33:31.438 [2024-05-15 02:01:55.229029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.438 [2024-05-15 02:01:55.229155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.438 [2024-05-15 02:01:55.229181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.438 qpair failed and we were unable to recover it. 00:33:31.438 [2024-05-15 02:01:55.229313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.438 [2024-05-15 02:01:55.229417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.438 [2024-05-15 02:01:55.229446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.438 qpair failed and we were unable to recover it. 00:33:31.438 [2024-05-15 02:01:55.229548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.438 [2024-05-15 02:01:55.229689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.438 [2024-05-15 02:01:55.229740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.438 qpair failed and we were unable to recover it. 00:33:31.438 [2024-05-15 02:01:55.229851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.438 [2024-05-15 02:01:55.230007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.438 [2024-05-15 02:01:55.230049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.438 qpair failed and we were unable to recover it. 00:33:31.438 [2024-05-15 02:01:55.230138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.438 [2024-05-15 02:01:55.230249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.438 [2024-05-15 02:01:55.230274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.438 qpair failed and we were unable to recover it. 
00:33:31.438 [2024-05-15 02:01:55.230419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.438 [2024-05-15 02:01:55.230527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.438 [2024-05-15 02:01:55.230551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.438 qpair failed and we were unable to recover it. 00:33:31.438 [2024-05-15 02:01:55.230681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.438 [2024-05-15 02:01:55.230766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.438 [2024-05-15 02:01:55.230790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.438 qpair failed and we were unable to recover it. 00:33:31.438 [2024-05-15 02:01:55.230885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.438 [2024-05-15 02:01:55.231001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.438 [2024-05-15 02:01:55.231026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.438 qpair failed and we were unable to recover it. 00:33:31.438 [2024-05-15 02:01:55.231174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.438 [2024-05-15 02:01:55.231277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.438 [2024-05-15 02:01:55.231304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.438 qpair failed and we were unable to recover it. 00:33:31.438 [2024-05-15 02:01:55.231409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.438 [2024-05-15 02:01:55.231508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.438 [2024-05-15 02:01:55.231532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.438 qpair failed and we were unable to recover it. 00:33:31.438 [2024-05-15 02:01:55.231656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.438 [2024-05-15 02:01:55.231754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.438 [2024-05-15 02:01:55.231780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.438 qpair failed and we were unable to recover it. 00:33:31.438 [2024-05-15 02:01:55.231904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.438 [2024-05-15 02:01:55.231998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.438 [2024-05-15 02:01:55.232023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.438 qpair failed and we were unable to recover it. 
00:33:31.438 [2024-05-15 02:01:55.232122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.438 [2024-05-15 02:01:55.232249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.438 [2024-05-15 02:01:55.232276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:31.438 qpair failed and we were unable to recover it.
00:33:31.438-00:33:31.443 [... the same failure cycle (two posix.c:1037:posix_sock_create connect() errors with errno = 111, one nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats continuously from 02:01:55.232373 through 02:01:55.272824 ...]
00:33:31.443 [2024-05-15 02:01:55.272919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.443 [2024-05-15 02:01:55.273046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.443 [2024-05-15 02:01:55.273071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.443 qpair failed and we were unable to recover it. 00:33:31.443 [2024-05-15 02:01:55.273171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.443 [2024-05-15 02:01:55.273293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.443 [2024-05-15 02:01:55.273338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.443 qpair failed and we were unable to recover it. 00:33:31.443 [2024-05-15 02:01:55.273436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.443 [2024-05-15 02:01:55.273556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.443 [2024-05-15 02:01:55.273582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.443 qpair failed and we were unable to recover it. 00:33:31.443 [2024-05-15 02:01:55.273706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.443 [2024-05-15 02:01:55.273830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.443 [2024-05-15 02:01:55.273856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.443 qpair failed and we were unable to recover it. 00:33:31.443 [2024-05-15 02:01:55.273975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.443 [2024-05-15 02:01:55.274068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.443 [2024-05-15 02:01:55.274095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.443 qpair failed and we were unable to recover it. 00:33:31.443 [2024-05-15 02:01:55.274200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.443 [2024-05-15 02:01:55.274304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.443 [2024-05-15 02:01:55.274330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.443 qpair failed and we were unable to recover it. 00:33:31.443 [2024-05-15 02:01:55.274444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.443 [2024-05-15 02:01:55.274547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.274572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.444 qpair failed and we were unable to recover it. 
00:33:31.444 [2024-05-15 02:01:55.274693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.274816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.274841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.444 qpair failed and we were unable to recover it. 00:33:31.444 [2024-05-15 02:01:55.274970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.275096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.275121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.444 qpair failed and we were unable to recover it. 00:33:31.444 [2024-05-15 02:01:55.275225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.275360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.275388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.444 qpair failed and we were unable to recover it. 00:33:31.444 [2024-05-15 02:01:55.275501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.275639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.275665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.444 qpair failed and we were unable to recover it. 00:33:31.444 [2024-05-15 02:01:55.275768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.275865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.275890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.444 qpair failed and we were unable to recover it. 00:33:31.444 [2024-05-15 02:01:55.275987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.276109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.276134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.444 qpair failed and we were unable to recover it. 00:33:31.444 [2024-05-15 02:01:55.276239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.276359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.276384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.444 qpair failed and we were unable to recover it. 
00:33:31.444 [2024-05-15 02:01:55.276533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.276633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.276658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.444 qpair failed and we were unable to recover it. 00:33:31.444 [2024-05-15 02:01:55.276790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.276883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.276908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.444 qpair failed and we were unable to recover it. 00:33:31.444 [2024-05-15 02:01:55.277011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.277132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.277158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.444 qpair failed and we were unable to recover it. 00:33:31.444 [2024-05-15 02:01:55.277257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.277360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.277386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.444 qpair failed and we were unable to recover it. 00:33:31.444 [2024-05-15 02:01:55.277487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.277582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.277607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.444 qpair failed and we were unable to recover it. 00:33:31.444 [2024-05-15 02:01:55.277702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.277849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.277875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.444 qpair failed and we were unable to recover it. 00:33:31.444 [2024-05-15 02:01:55.277976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.278062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.278088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.444 qpair failed and we were unable to recover it. 
00:33:31.444 [2024-05-15 02:01:55.278191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.278296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.278321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.444 qpair failed and we were unable to recover it. 00:33:31.444 [2024-05-15 02:01:55.278444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.278545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.278570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.444 qpair failed and we were unable to recover it. 00:33:31.444 [2024-05-15 02:01:55.278699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.278804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.278829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.444 qpair failed and we were unable to recover it. 00:33:31.444 [2024-05-15 02:01:55.278952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.279071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.279096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.444 qpair failed and we were unable to recover it. 00:33:31.444 [2024-05-15 02:01:55.279196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.279348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.279373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.444 qpair failed and we were unable to recover it. 00:33:31.444 [2024-05-15 02:01:55.279492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.279585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.279611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.444 qpair failed and we were unable to recover it. 00:33:31.444 [2024-05-15 02:01:55.279700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.279813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.279838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.444 qpair failed and we were unable to recover it. 
00:33:31.444 [2024-05-15 02:01:55.279937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.280059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.280083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.444 qpair failed and we were unable to recover it. 00:33:31.444 [2024-05-15 02:01:55.280233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.280368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.280394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.444 qpair failed and we were unable to recover it. 00:33:31.444 [2024-05-15 02:01:55.280487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.280586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.280611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.444 qpair failed and we were unable to recover it. 00:33:31.444 [2024-05-15 02:01:55.280737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.280832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.280858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.444 qpair failed and we were unable to recover it. 00:33:31.444 [2024-05-15 02:01:55.280972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.281091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.281116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.444 qpair failed and we were unable to recover it. 00:33:31.444 [2024-05-15 02:01:55.281233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.281335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.281360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.444 qpair failed and we were unable to recover it. 00:33:31.444 [2024-05-15 02:01:55.281482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.281606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.281631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.444 qpair failed and we were unable to recover it. 
00:33:31.444 [2024-05-15 02:01:55.281749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.281846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.444 [2024-05-15 02:01:55.281873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.444 qpair failed and we were unable to recover it. 00:33:31.445 [2024-05-15 02:01:55.281975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.282075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.282101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.445 qpair failed and we were unable to recover it. 00:33:31.445 [2024-05-15 02:01:55.282239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.282332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.282360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.445 qpair failed and we were unable to recover it. 00:33:31.445 [2024-05-15 02:01:55.282469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.282573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.282599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.445 qpair failed and we were unable to recover it. 00:33:31.445 [2024-05-15 02:01:55.282703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.282827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.282855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.445 qpair failed and we were unable to recover it. 00:33:31.445 [2024-05-15 02:01:55.282951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.283065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.283091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.445 qpair failed and we were unable to recover it. 00:33:31.445 [2024-05-15 02:01:55.283191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.283324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.283350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.445 qpair failed and we were unable to recover it. 
00:33:31.445 [2024-05-15 02:01:55.283482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.283579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.283606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.445 qpair failed and we were unable to recover it. 00:33:31.445 [2024-05-15 02:01:55.283725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.283844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.283869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.445 qpair failed and we were unable to recover it. 00:33:31.445 [2024-05-15 02:01:55.283991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.284085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.284111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.445 qpair failed and we were unable to recover it. 00:33:31.445 [2024-05-15 02:01:55.284211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.284342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.284368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.445 qpair failed and we were unable to recover it. 00:33:31.445 [2024-05-15 02:01:55.284471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.284594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.284620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.445 qpair failed and we were unable to recover it. 00:33:31.445 [2024-05-15 02:01:55.284723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.284817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.284844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.445 qpair failed and we were unable to recover it. 00:33:31.445 [2024-05-15 02:01:55.284951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.285068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.285093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.445 qpair failed and we were unable to recover it. 
00:33:31.445 [2024-05-15 02:01:55.285226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.285321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.285347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.445 qpair failed and we were unable to recover it. 00:33:31.445 [2024-05-15 02:01:55.285448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.285566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.285591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.445 qpair failed and we were unable to recover it. 00:33:31.445 [2024-05-15 02:01:55.285687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.285786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.285811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.445 qpair failed and we were unable to recover it. 00:33:31.445 [2024-05-15 02:01:55.285903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.286024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.286049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.445 qpair failed and we were unable to recover it. 00:33:31.445 [2024-05-15 02:01:55.286174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.286274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.286301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.445 qpair failed and we were unable to recover it. 00:33:31.445 [2024-05-15 02:01:55.286398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.286495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.286520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.445 qpair failed and we were unable to recover it. 00:33:31.445 [2024-05-15 02:01:55.286616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.286711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.286737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.445 qpair failed and we were unable to recover it. 
00:33:31.445 [2024-05-15 02:01:55.286855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.286951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.286977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.445 qpair failed and we were unable to recover it. 00:33:31.445 [2024-05-15 02:01:55.287081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.287188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.287213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.445 qpair failed and we were unable to recover it. 00:33:31.445 [2024-05-15 02:01:55.287383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.287481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.287506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.445 qpair failed and we were unable to recover it. 00:33:31.445 [2024-05-15 02:01:55.287602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.287745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.287771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.445 qpair failed and we were unable to recover it. 00:33:31.445 [2024-05-15 02:01:55.287894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.287992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.288017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.445 qpair failed and we were unable to recover it. 00:33:31.445 [2024-05-15 02:01:55.288139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.288262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.288288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.445 qpair failed and we were unable to recover it. 00:33:31.445 [2024-05-15 02:01:55.288388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.288491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.288516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.445 qpair failed and we were unable to recover it. 
00:33:31.445 [2024-05-15 02:01:55.288613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.288758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.288783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.445 qpair failed and we were unable to recover it. 00:33:31.445 [2024-05-15 02:01:55.288933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.289035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.445 [2024-05-15 02:01:55.289060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.445 qpair failed and we were unable to recover it. 00:33:31.445 [2024-05-15 02:01:55.289155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.289258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.289285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.446 qpair failed and we were unable to recover it. 00:33:31.446 [2024-05-15 02:01:55.289372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.289475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.289500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.446 qpair failed and we were unable to recover it. 00:33:31.446 [2024-05-15 02:01:55.289600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.289698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.289724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:31.446 qpair failed and we were unable to recover it. 00:33:31.446 [2024-05-15 02:01:55.289846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.289955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.289984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.446 qpair failed and we were unable to recover it. 00:33:31.446 [2024-05-15 02:01:55.290082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.290207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.290256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.446 qpair failed and we were unable to recover it. 
00:33:31.446 [2024-05-15 02:01:55.290392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.290496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.290527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.446 qpair failed and we were unable to recover it. 00:33:31.446 [2024-05-15 02:01:55.290631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.290766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.290795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.446 qpair failed and we were unable to recover it. 00:33:31.446 [2024-05-15 02:01:55.290900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.291004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.291034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.446 qpair failed and we were unable to recover it. 00:33:31.446 [2024-05-15 02:01:55.291162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.291270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.291299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.446 qpair failed and we were unable to recover it. 00:33:31.446 [2024-05-15 02:01:55.291435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.291570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.291600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.446 qpair failed and we were unable to recover it. 00:33:31.446 [2024-05-15 02:01:55.291738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.291836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.291865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.446 qpair failed and we were unable to recover it. 00:33:31.446 [2024-05-15 02:01:55.291980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.292076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.292105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.446 qpair failed and we were unable to recover it. 
00:33:31.446 [2024-05-15 02:01:55.292231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.292330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.292356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:31.446 qpair failed and we were unable to recover it. 00:33:31.446 [2024-05-15 02:01:55.292483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.292633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.292666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.446 qpair failed and we were unable to recover it. 00:33:31.446 [2024-05-15 02:01:55.292781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.292907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.292936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.446 qpair failed and we were unable to recover it. 00:33:31.446 [2024-05-15 02:01:55.293052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.293172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.293200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.446 qpair failed and we were unable to recover it. 00:33:31.446 [2024-05-15 02:01:55.293353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.293477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.293503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.446 qpair failed and we were unable to recover it. 00:33:31.446 [2024-05-15 02:01:55.293623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.293738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.293768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.446 qpair failed and we were unable to recover it. 00:33:31.446 [2024-05-15 02:01:55.293933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.294075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.294103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.446 qpair failed and we were unable to recover it. 
00:33:31.446 [2024-05-15 02:01:55.294202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.294322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.294348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.446 qpair failed and we were unable to recover it. 00:33:31.446 [2024-05-15 02:01:55.294475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.294613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.294641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.446 qpair failed and we were unable to recover it. 00:33:31.446 [2024-05-15 02:01:55.294768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.294890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.294919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.446 qpair failed and we were unable to recover it. 00:33:31.446 [2024-05-15 02:01:55.295047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.295163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.295191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.446 qpair failed and we were unable to recover it. 00:33:31.446 [2024-05-15 02:01:55.295315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.295426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.295453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.446 qpair failed and we were unable to recover it. 00:33:31.446 [2024-05-15 02:01:55.295596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.295726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.295754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.446 qpair failed and we were unable to recover it. 00:33:31.446 [2024-05-15 02:01:55.295873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.446 [2024-05-15 02:01:55.296009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.447 [2024-05-15 02:01:55.296037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.447 qpair failed and we were unable to recover it. 
00:33:31.447 [2024-05-15 02:01:55.296147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.447 [2024-05-15 02:01:55.296263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.447 [2024-05-15 02:01:55.296289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.447 qpair failed and we were unable to recover it. 00:33:31.447 [2024-05-15 02:01:55.296387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.447 [2024-05-15 02:01:55.296482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.447 [2024-05-15 02:01:55.296525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.447 qpair failed and we were unable to recover it. 00:33:31.447 [2024-05-15 02:01:55.296631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.447 [2024-05-15 02:01:55.296737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.447 [2024-05-15 02:01:55.296765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.447 qpair failed and we were unable to recover it. 00:33:31.447 [2024-05-15 02:01:55.296865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.447 [2024-05-15 02:01:55.296996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.447 [2024-05-15 02:01:55.297024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.447 qpair failed and we were unable to recover it. 00:33:31.447 [2024-05-15 02:01:55.297160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.447 [2024-05-15 02:01:55.297255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.447 [2024-05-15 02:01:55.297282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.447 qpair failed and we were unable to recover it. 00:33:31.447 [2024-05-15 02:01:55.297398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.447 [2024-05-15 02:01:55.297549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.447 [2024-05-15 02:01:55.297577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.447 qpair failed and we were unable to recover it. 00:33:31.447 [2024-05-15 02:01:55.297715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.447 [2024-05-15 02:01:55.297850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.447 [2024-05-15 02:01:55.297878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.447 qpair failed and we were unable to recover it. 
00:33:31.447 [2024-05-15 02:01:55.298001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:33:31.447 [2024-05-15 02:01:55.298152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:33:31.447 [2024-05-15 02:01:55.298182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 
00:33:31.447 qpair failed and we were unable to recover it. 
[the four-line sequence above repeats ~150 more times for the same tqpair (0x7f2124000b90) against addr=10.0.0.2, port=4420, every attempt failing with errno = 111; log timestamps advance from 2024-05-15 02:01:55.298 through 02:01:55.339 (console time 00:33:31.447 to 00:33:31.732)]
00:33:31.732 [2024-05-15 02:01:55.340094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.732 [2024-05-15 02:01:55.340222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.732 [2024-05-15 02:01:55.340249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.732 qpair failed and we were unable to recover it. 00:33:31.732 [2024-05-15 02:01:55.340344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.732 [2024-05-15 02:01:55.340462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.732 [2024-05-15 02:01:55.340491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.732 qpair failed and we were unable to recover it. 00:33:31.732 [2024-05-15 02:01:55.340630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.732 [2024-05-15 02:01:55.340774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.732 [2024-05-15 02:01:55.340799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.732 qpair failed and we were unable to recover it. 00:33:31.732 [2024-05-15 02:01:55.340918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.732 [2024-05-15 02:01:55.341076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.732 [2024-05-15 02:01:55.341104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.732 qpair failed and we were unable to recover it. 00:33:31.732 [2024-05-15 02:01:55.341263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.732 [2024-05-15 02:01:55.341389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.732 [2024-05-15 02:01:55.341415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.732 qpair failed and we were unable to recover it. 00:33:31.732 [2024-05-15 02:01:55.341535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.732 [2024-05-15 02:01:55.341658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.732 [2024-05-15 02:01:55.341684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.732 qpair failed and we were unable to recover it. 00:33:31.732 [2024-05-15 02:01:55.341824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.732 [2024-05-15 02:01:55.341955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.732 [2024-05-15 02:01:55.341983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.732 qpair failed and we were unable to recover it. 
00:33:31.732 [2024-05-15 02:01:55.342114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.732 [2024-05-15 02:01:55.342261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.732 [2024-05-15 02:01:55.342291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.732 qpair failed and we were unable to recover it. 00:33:31.732 [2024-05-15 02:01:55.342461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.732 [2024-05-15 02:01:55.342564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.732 [2024-05-15 02:01:55.342590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.732 qpair failed and we were unable to recover it. 00:33:31.732 [2024-05-15 02:01:55.342712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.732 [2024-05-15 02:01:55.342810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.732 [2024-05-15 02:01:55.342839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.732 qpair failed and we were unable to recover it. 00:33:31.732 [2024-05-15 02:01:55.343011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.732 [2024-05-15 02:01:55.343129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.732 [2024-05-15 02:01:55.343154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.732 qpair failed and we were unable to recover it. 00:33:31.732 [2024-05-15 02:01:55.343238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.732 [2024-05-15 02:01:55.343353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.732 [2024-05-15 02:01:55.343379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.732 qpair failed and we were unable to recover it. 00:33:31.732 [2024-05-15 02:01:55.343509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.732 [2024-05-15 02:01:55.343682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.732 [2024-05-15 02:01:55.343707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.732 qpair failed and we were unable to recover it. 00:33:31.732 [2024-05-15 02:01:55.343831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.732 [2024-05-15 02:01:55.343912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.732 [2024-05-15 02:01:55.343937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.732 qpair failed and we were unable to recover it. 
00:33:31.732 [2024-05-15 02:01:55.344034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.732 [2024-05-15 02:01:55.344153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.732 [2024-05-15 02:01:55.344179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.732 qpair failed and we were unable to recover it. 00:33:31.732 [2024-05-15 02:01:55.344306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.732 [2024-05-15 02:01:55.344411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.732 [2024-05-15 02:01:55.344440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.732 qpair failed and we were unable to recover it. 00:33:31.732 [2024-05-15 02:01:55.344543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.732 [2024-05-15 02:01:55.344675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.732 [2024-05-15 02:01:55.344704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.732 qpair failed and we were unable to recover it. 00:33:31.732 [2024-05-15 02:01:55.344821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.732 [2024-05-15 02:01:55.344946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.732 [2024-05-15 02:01:55.344973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.732 qpair failed and we were unable to recover it. 00:33:31.732 [2024-05-15 02:01:55.345116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.732 [2024-05-15 02:01:55.345221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.732 [2024-05-15 02:01:55.345251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.732 qpair failed and we were unable to recover it. 00:33:31.733 [2024-05-15 02:01:55.345417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.345517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.345547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.733 qpair failed and we were unable to recover it. 00:33:31.733 [2024-05-15 02:01:55.345681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.345780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.345806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.733 qpair failed and we were unable to recover it. 
00:33:31.733 [2024-05-15 02:01:55.345939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.346062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.346091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.733 qpair failed and we were unable to recover it. 00:33:31.733 [2024-05-15 02:01:55.346272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.346420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.346446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.733 qpair failed and we were unable to recover it. 00:33:31.733 [2024-05-15 02:01:55.346568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.346688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.346713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.733 qpair failed and we were unable to recover it. 00:33:31.733 [2024-05-15 02:01:55.346871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.346996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.347022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.733 qpair failed and we were unable to recover it. 00:33:31.733 [2024-05-15 02:01:55.347175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.347331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.347357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.733 qpair failed and we were unable to recover it. 00:33:31.733 [2024-05-15 02:01:55.347454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.347579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.347605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.733 qpair failed and we were unable to recover it. 00:33:31.733 [2024-05-15 02:01:55.347694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.347818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.347847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.733 qpair failed and we were unable to recover it. 
00:33:31.733 [2024-05-15 02:01:55.348004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.348141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.348169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.733 qpair failed and we were unable to recover it. 00:33:31.733 [2024-05-15 02:01:55.348338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.348465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.348491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.733 qpair failed and we were unable to recover it. 00:33:31.733 [2024-05-15 02:01:55.348652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.348757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.348786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.733 qpair failed and we were unable to recover it. 00:33:31.733 [2024-05-15 02:01:55.348918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.349046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.349075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.733 qpair failed and we were unable to recover it. 00:33:31.733 [2024-05-15 02:01:55.349190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.349329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.349356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.733 qpair failed and we were unable to recover it. 00:33:31.733 [2024-05-15 02:01:55.349479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.349635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.349663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.733 qpair failed and we were unable to recover it. 00:33:31.733 [2024-05-15 02:01:55.349765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.349921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.349949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.733 qpair failed and we were unable to recover it. 
00:33:31.733 [2024-05-15 02:01:55.350090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.350180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.350205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.733 qpair failed and we were unable to recover it. 00:33:31.733 [2024-05-15 02:01:55.350331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.350447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.350473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.733 qpair failed and we were unable to recover it. 00:33:31.733 [2024-05-15 02:01:55.350592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.350765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.350799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.733 qpair failed and we were unable to recover it. 00:33:31.733 [2024-05-15 02:01:55.350945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.351095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.351121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.733 qpair failed and we were unable to recover it. 00:33:31.733 [2024-05-15 02:01:55.351297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.351397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.351423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.733 qpair failed and we were unable to recover it. 00:33:31.733 [2024-05-15 02:01:55.351521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.351637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.351663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.733 qpair failed and we were unable to recover it. 00:33:31.733 [2024-05-15 02:01:55.351776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.351895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.351922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.733 qpair failed and we were unable to recover it. 
00:33:31.733 [2024-05-15 02:01:55.352092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.352223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.352253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.733 qpair failed and we were unable to recover it. 00:33:31.733 [2024-05-15 02:01:55.352409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.352559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.352585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.733 qpair failed and we were unable to recover it. 00:33:31.733 [2024-05-15 02:01:55.352674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.352819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.352845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.733 qpair failed and we were unable to recover it. 00:33:31.733 [2024-05-15 02:01:55.352993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.353116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.353142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.733 qpair failed and we were unable to recover it. 00:33:31.733 [2024-05-15 02:01:55.353316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.353439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.353468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.733 qpair failed and we were unable to recover it. 00:33:31.733 [2024-05-15 02:01:55.353583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.353719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.733 [2024-05-15 02:01:55.353749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.733 qpair failed and we were unable to recover it. 00:33:31.733 [2024-05-15 02:01:55.353865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.354003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.354031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.734 qpair failed and we were unable to recover it. 
00:33:31.734 [2024-05-15 02:01:55.354164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.354291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.354320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.734 qpair failed and we were unable to recover it. 00:33:31.734 [2024-05-15 02:01:55.354440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.354524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.354551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.734 qpair failed and we were unable to recover it. 00:33:31.734 [2024-05-15 02:01:55.354686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.354791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.354819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.734 qpair failed and we were unable to recover it. 00:33:31.734 [2024-05-15 02:01:55.354953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.355083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.355112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.734 qpair failed and we were unable to recover it. 00:33:31.734 [2024-05-15 02:01:55.355278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.355379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.355404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.734 qpair failed and we were unable to recover it. 00:33:31.734 [2024-05-15 02:01:55.355572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.355708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.355736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.734 qpair failed and we were unable to recover it. 00:33:31.734 [2024-05-15 02:01:55.355877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.356020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.356045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.734 qpair failed and we were unable to recover it. 
00:33:31.734 [2024-05-15 02:01:55.356138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.356289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.356316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.734 qpair failed and we were unable to recover it. 00:33:31.734 [2024-05-15 02:01:55.356460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.356594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.356628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.734 qpair failed and we were unable to recover it. 00:33:31.734 [2024-05-15 02:01:55.356764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.356895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.356924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.734 qpair failed and we were unable to recover it. 00:33:31.734 [2024-05-15 02:01:55.357085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.357175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.357201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.734 qpair failed and we were unable to recover it. 00:33:31.734 [2024-05-15 02:01:55.357349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.357497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.357523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.734 qpair failed and we were unable to recover it. 00:33:31.734 [2024-05-15 02:01:55.357613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.357723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.357752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.734 qpair failed and we were unable to recover it. 00:33:31.734 [2024-05-15 02:01:55.357893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.358040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.358066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.734 qpair failed and we were unable to recover it. 
00:33:31.734 [2024-05-15 02:01:55.358191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.358285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.358312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.734 qpair failed and we were unable to recover it. 00:33:31.734 [2024-05-15 02:01:55.358436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.358557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.358583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.734 qpair failed and we were unable to recover it. 00:33:31.734 [2024-05-15 02:01:55.358697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.358843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.358869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.734 qpair failed and we were unable to recover it. 00:33:31.734 [2024-05-15 02:01:55.359019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.359150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.359178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.734 qpair failed and we were unable to recover it. 00:33:31.734 [2024-05-15 02:01:55.359323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.359423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.359448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.734 qpair failed and we were unable to recover it. 00:33:31.734 [2024-05-15 02:01:55.359575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.359694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.359719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.734 qpair failed and we were unable to recover it. 00:33:31.734 [2024-05-15 02:01:55.359895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.360029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.360057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.734 qpair failed and we were unable to recover it. 
00:33:31.734 [2024-05-15 02:01:55.360226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.360324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.360354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.734 qpair failed and we were unable to recover it. 00:33:31.734 [2024-05-15 02:01:55.360474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.360573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.360600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.734 qpair failed and we were unable to recover it. 00:33:31.734 [2024-05-15 02:01:55.360735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.360881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.360907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.734 qpair failed and we were unable to recover it. 00:33:31.734 [2024-05-15 02:01:55.361027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.361168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.361196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.734 qpair failed and we were unable to recover it. 00:33:31.734 [2024-05-15 02:01:55.361369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.361513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.361539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.734 qpair failed and we were unable to recover it. 00:33:31.734 [2024-05-15 02:01:55.361680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.361819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.361849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.734 qpair failed and we were unable to recover it. 00:33:31.734 [2024-05-15 02:01:55.361984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.362132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.734 [2024-05-15 02:01:55.362158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.734 qpair failed and we were unable to recover it. 
00:33:31.734 [2024-05-15 02:01:55.362254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.735 [2024-05-15 02:01:55.362349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.735 [2024-05-15 02:01:55.362375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.735 qpair failed and we were unable to recover it. 00:33:31.735 [2024-05-15 02:01:55.362479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.735 [2024-05-15 02:01:55.362600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.735 [2024-05-15 02:01:55.362628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.735 qpair failed and we were unable to recover it. 00:33:31.735 [2024-05-15 02:01:55.362809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.735 [2024-05-15 02:01:55.362935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.735 [2024-05-15 02:01:55.362961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.735 qpair failed and we were unable to recover it. 00:33:31.735 [2024-05-15 02:01:55.363081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.735 [2024-05-15 02:01:55.363227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.735 [2024-05-15 02:01:55.363253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.735 qpair failed and we were unable to recover it. 00:33:31.735 [2024-05-15 02:01:55.363402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.735 [2024-05-15 02:01:55.363505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.735 [2024-05-15 02:01:55.363534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.735 qpair failed and we were unable to recover it. 00:33:31.735 [2024-05-15 02:01:55.363660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.735 [2024-05-15 02:01:55.363756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.735 [2024-05-15 02:01:55.363784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.735 qpair failed and we were unable to recover it. 00:33:31.735 [2024-05-15 02:01:55.363951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.735 [2024-05-15 02:01:55.364041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.735 [2024-05-15 02:01:55.364068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.735 qpair failed and we were unable to recover it. 
00:33:31.735 [2024-05-15 02:01:55.364271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.735 [2024-05-15 02:01:55.364369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.735 [2024-05-15 02:01:55.364395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.735 qpair failed and we were unable to recover it. 00:33:31.735 [2024-05-15 02:01:55.364495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.735 [2024-05-15 02:01:55.364644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.735 [2024-05-15 02:01:55.364670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.735 qpair failed and we were unable to recover it. 00:33:31.735 [2024-05-15 02:01:55.364794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.735 [2024-05-15 02:01:55.364914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.735 [2024-05-15 02:01:55.364940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.735 qpair failed and we were unable to recover it. 00:33:31.735 [2024-05-15 02:01:55.365081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.735 [2024-05-15 02:01:55.365184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.735 [2024-05-15 02:01:55.365212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.735 qpair failed and we were unable to recover it. 00:33:31.735 [2024-05-15 02:01:55.365389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.735 [2024-05-15 02:01:55.365478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.735 [2024-05-15 02:01:55.365504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.735 qpair failed and we were unable to recover it. 00:33:31.735 [2024-05-15 02:01:55.365633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.735 [2024-05-15 02:01:55.365754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.735 [2024-05-15 02:01:55.365780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.735 qpair failed and we were unable to recover it. 00:33:31.735 [2024-05-15 02:01:55.365919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.735 [2024-05-15 02:01:55.366054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.735 [2024-05-15 02:01:55.366082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.735 qpair failed and we were unable to recover it. 
00:33:31.735 [2024-05-15 02:01:55.366212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.735 [2024-05-15 02:01:55.366324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.735 [2024-05-15 02:01:55.366353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.735 qpair failed and we were unable to recover it. 00:33:31.735 [2024-05-15 02:01:55.366507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.735 [2024-05-15 02:01:55.366654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.735 [2024-05-15 02:01:55.366680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.735 qpair failed and we were unable to recover it. 00:33:31.735 [2024-05-15 02:01:55.366855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.735 [2024-05-15 02:01:55.366984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.735 [2024-05-15 02:01:55.367013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.735 qpair failed and we were unable to recover it. 00:33:31.735 [2024-05-15 02:01:55.367152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.735 [2024-05-15 02:01:55.367282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.735 [2024-05-15 02:01:55.367311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.735 qpair failed and we were unable to recover it. 00:33:31.735 [2024-05-15 02:01:55.367425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.735 [2024-05-15 02:01:55.367570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.735 [2024-05-15 02:01:55.367596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.735 qpair failed and we were unable to recover it. 00:33:31.735 [2024-05-15 02:01:55.367765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.735 [2024-05-15 02:01:55.367897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.735 [2024-05-15 02:01:55.367925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.735 qpair failed and we were unable to recover it. 00:33:31.735 [2024-05-15 02:01:55.368085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.735 [2024-05-15 02:01:55.368223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.735 [2024-05-15 02:01:55.368252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.735 qpair failed and we were unable to recover it. 
00:33:31.735 [2024-05-15 02:01:55.368374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.735 [2024-05-15 02:01:55.368478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.735 [2024-05-15 02:01:55.368505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.735 qpair failed and we were unable to recover it.
[... the same failure pattern (two posix.c:1037:posix_sock_create "connect() failed, errno = 111" errors, one nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it.") repeats roughly 150 more times between 02:01:55.368 and 02:01:55.413; only the timestamps differ ...]
00:33:31.741 [2024-05-15 02:01:55.413506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.741 [2024-05-15 02:01:55.413667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.741 [2024-05-15 02:01:55.413696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.741 qpair failed and we were unable to recover it. 00:33:31.741 [2024-05-15 02:01:55.413840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.741 [2024-05-15 02:01:55.413930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.741 [2024-05-15 02:01:55.413956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.741 qpair failed and we were unable to recover it. 00:33:31.741 [2024-05-15 02:01:55.414051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.741 [2024-05-15 02:01:55.414149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.741 [2024-05-15 02:01:55.414175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.741 qpair failed and we were unable to recover it. 00:33:31.741 [2024-05-15 02:01:55.414325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.741 [2024-05-15 02:01:55.414457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.741 [2024-05-15 02:01:55.414485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.741 qpair failed and we were unable to recover it. 00:33:31.741 [2024-05-15 02:01:55.414597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.741 [2024-05-15 02:01:55.414729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.741 [2024-05-15 02:01:55.414757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.741 qpair failed and we were unable to recover it. 00:33:31.741 [2024-05-15 02:01:55.414882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.741 [2024-05-15 02:01:55.414978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.741 [2024-05-15 02:01:55.415004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.741 qpair failed and we were unable to recover it. 00:33:31.741 [2024-05-15 02:01:55.415153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.741 [2024-05-15 02:01:55.415246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.741 [2024-05-15 02:01:55.415290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.741 qpair failed and we were unable to recover it. 
00:33:31.741 [2024-05-15 02:01:55.415403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.741 [2024-05-15 02:01:55.415510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.741 [2024-05-15 02:01:55.415538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.741 qpair failed and we were unable to recover it. 00:33:31.741 [2024-05-15 02:01:55.415707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.741 [2024-05-15 02:01:55.415808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.741 [2024-05-15 02:01:55.415835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.741 qpair failed and we were unable to recover it. 00:33:31.741 [2024-05-15 02:01:55.416006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.741 [2024-05-15 02:01:55.416112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.741 [2024-05-15 02:01:55.416140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.741 qpair failed and we were unable to recover it. 00:33:31.741 [2024-05-15 02:01:55.416252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.741 [2024-05-15 02:01:55.416416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.741 [2024-05-15 02:01:55.416444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.741 qpair failed and we were unable to recover it. 00:33:31.741 [2024-05-15 02:01:55.416586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.741 [2024-05-15 02:01:55.416711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.741 [2024-05-15 02:01:55.416736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.741 qpair failed and we were unable to recover it. 00:33:31.741 [2024-05-15 02:01:55.416852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.741 [2024-05-15 02:01:55.416961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.741 [2024-05-15 02:01:55.416987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.741 qpair failed and we were unable to recover it. 00:33:31.741 [2024-05-15 02:01:55.417082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.741 [2024-05-15 02:01:55.417195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.741 [2024-05-15 02:01:55.417228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.741 qpair failed and we were unable to recover it. 
00:33:31.741 [2024-05-15 02:01:55.417353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.741 [2024-05-15 02:01:55.417448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.741 [2024-05-15 02:01:55.417474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.741 qpair failed and we were unable to recover it. 00:33:31.741 [2024-05-15 02:01:55.417594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.741 [2024-05-15 02:01:55.417733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.741 [2024-05-15 02:01:55.417762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.741 qpair failed and we were unable to recover it. 00:33:31.741 [2024-05-15 02:01:55.417905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.741 [2024-05-15 02:01:55.418055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.741 [2024-05-15 02:01:55.418082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.741 qpair failed and we were unable to recover it. 00:33:31.741 [2024-05-15 02:01:55.418207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.741 [2024-05-15 02:01:55.418312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.741 [2024-05-15 02:01:55.418339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.741 qpair failed and we were unable to recover it. 00:33:31.741 [2024-05-15 02:01:55.418453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.741 [2024-05-15 02:01:55.418568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.418596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.742 qpair failed and we were unable to recover it. 00:33:31.742 [2024-05-15 02:01:55.418718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.418818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.418844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.742 qpair failed and we were unable to recover it. 00:33:31.742 [2024-05-15 02:01:55.418961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.419108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.419133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.742 qpair failed and we were unable to recover it. 
00:33:31.742 [2024-05-15 02:01:55.419310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.419412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.419440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.742 qpair failed and we were unable to recover it. 00:33:31.742 [2024-05-15 02:01:55.419613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.419703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.419729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.742 qpair failed and we were unable to recover it. 00:33:31.742 [2024-05-15 02:01:55.419891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.420012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.420037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.742 qpair failed and we were unable to recover it. 00:33:31.742 [2024-05-15 02:01:55.420157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.420253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.420280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.742 qpair failed and we were unable to recover it. 00:33:31.742 [2024-05-15 02:01:55.420367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.420457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.420482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.742 qpair failed and we were unable to recover it. 00:33:31.742 [2024-05-15 02:01:55.420629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.420726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.420752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.742 qpair failed and we were unable to recover it. 00:33:31.742 [2024-05-15 02:01:55.420874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.420987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.421013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.742 qpair failed and we were unable to recover it. 
00:33:31.742 [2024-05-15 02:01:55.421105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.421237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.421282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.742 qpair failed and we were unable to recover it. 00:33:31.742 [2024-05-15 02:01:55.421410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.421503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.421529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.742 qpair failed and we were unable to recover it. 00:33:31.742 [2024-05-15 02:01:55.421626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.421774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.421799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.742 qpair failed and we were unable to recover it. 00:33:31.742 [2024-05-15 02:01:55.421930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.422022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.422048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.742 qpair failed and we were unable to recover it. 00:33:31.742 [2024-05-15 02:01:55.422142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.422267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.422293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.742 qpair failed and we were unable to recover it. 00:33:31.742 [2024-05-15 02:01:55.422409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.422506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.422534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.742 qpair failed and we were unable to recover it. 00:33:31.742 [2024-05-15 02:01:55.422671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.422765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.422793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.742 qpair failed and we were unable to recover it. 
00:33:31.742 [2024-05-15 02:01:55.422951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.423080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.423105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.742 qpair failed and we were unable to recover it. 00:33:31.742 [2024-05-15 02:01:55.423238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.423359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.423388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.742 qpair failed and we were unable to recover it. 00:33:31.742 [2024-05-15 02:01:55.423507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.423615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.423643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.742 qpair failed and we were unable to recover it. 00:33:31.742 [2024-05-15 02:01:55.423765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.423861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.423886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.742 qpair failed and we were unable to recover it. 00:33:31.742 [2024-05-15 02:01:55.423987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.424079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.424122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.742 qpair failed and we were unable to recover it. 00:33:31.742 [2024-05-15 02:01:55.424232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.424343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.424371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.742 qpair failed and we were unable to recover it. 00:33:31.742 [2024-05-15 02:01:55.424517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.424607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.424632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.742 qpair failed and we were unable to recover it. 
00:33:31.742 [2024-05-15 02:01:55.424733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.424854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.424879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.742 qpair failed and we were unable to recover it. 00:33:31.742 [2024-05-15 02:01:55.424997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.425099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.425128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.742 qpair failed and we were unable to recover it. 00:33:31.742 [2024-05-15 02:01:55.425278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.425377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.425402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.742 qpair failed and we were unable to recover it. 00:33:31.742 [2024-05-15 02:01:55.425549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.425645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.425672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.742 qpair failed and we were unable to recover it. 00:33:31.742 [2024-05-15 02:01:55.425778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.425905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.425933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.742 qpair failed and we were unable to recover it. 00:33:31.742 [2024-05-15 02:01:55.426059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.742 [2024-05-15 02:01:55.426171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.426196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.743 qpair failed and we were unable to recover it. 00:33:31.743 [2024-05-15 02:01:55.426323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.426422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.426447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.743 qpair failed and we were unable to recover it. 
00:33:31.743 [2024-05-15 02:01:55.426594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.426714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.426741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.743 qpair failed and we were unable to recover it. 00:33:31.743 [2024-05-15 02:01:55.426844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.426936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.426962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.743 qpair failed and we were unable to recover it. 00:33:31.743 [2024-05-15 02:01:55.427060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.427184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.427209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.743 qpair failed and we were unable to recover it. 00:33:31.743 [2024-05-15 02:01:55.427357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.427530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.427555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.743 qpair failed and we were unable to recover it. 00:33:31.743 [2024-05-15 02:01:55.427655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.427770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.427795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.743 qpair failed and we were unable to recover it. 00:33:31.743 [2024-05-15 02:01:55.427917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.428034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.428059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.743 qpair failed and we were unable to recover it. 00:33:31.743 [2024-05-15 02:01:55.428152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.428275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.428302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.743 qpair failed and we were unable to recover it. 
00:33:31.743 [2024-05-15 02:01:55.428390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.428481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.428505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.743 qpair failed and we were unable to recover it. 00:33:31.743 [2024-05-15 02:01:55.428595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.428714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.428743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.743 qpair failed and we were unable to recover it. 00:33:31.743 [2024-05-15 02:01:55.428844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.428953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.428981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.743 qpair failed and we were unable to recover it. 00:33:31.743 [2024-05-15 02:01:55.429145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.429268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.429294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.743 qpair failed and we were unable to recover it. 00:33:31.743 [2024-05-15 02:01:55.429439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.429545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.429573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.743 qpair failed and we were unable to recover it. 00:33:31.743 [2024-05-15 02:01:55.429688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.429806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.429831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.743 qpair failed and we were unable to recover it. 00:33:31.743 [2024-05-15 02:01:55.429926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.430027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.430052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.743 qpair failed and we were unable to recover it. 
00:33:31.743 [2024-05-15 02:01:55.430154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.430247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.430273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.743 qpair failed and we were unable to recover it. 00:33:31.743 [2024-05-15 02:01:55.430370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.430512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.430539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.743 qpair failed and we were unable to recover it. 00:33:31.743 [2024-05-15 02:01:55.430691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.430840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.430865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.743 qpair failed and we were unable to recover it. 00:33:31.743 [2024-05-15 02:01:55.431016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.431149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.431178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.743 qpair failed and we were unable to recover it. 00:33:31.743 [2024-05-15 02:01:55.431335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.431444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.431470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.743 qpair failed and we were unable to recover it. 00:33:31.743 [2024-05-15 02:01:55.431589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.431695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.431721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.743 qpair failed and we were unable to recover it. 00:33:31.743 [2024-05-15 02:01:55.431825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.431912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.431952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.743 qpair failed and we were unable to recover it. 
00:33:31.743 [2024-05-15 02:01:55.432086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.432278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.432304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.743 qpair failed and we were unable to recover it. 00:33:31.743 [2024-05-15 02:01:55.432398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.432489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.432514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.743 qpair failed and we were unable to recover it. 00:33:31.743 [2024-05-15 02:01:55.432605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.432725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.432751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.743 qpair failed and we were unable to recover it. 00:33:31.743 [2024-05-15 02:01:55.432866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.432985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.433013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.743 qpair failed and we were unable to recover it. 00:33:31.743 [2024-05-15 02:01:55.433130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.433254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.433280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.743 qpair failed and we were unable to recover it. 00:33:31.743 [2024-05-15 02:01:55.433431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.433540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.433568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.743 qpair failed and we were unable to recover it. 00:33:31.743 [2024-05-15 02:01:55.433679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.743 [2024-05-15 02:01:55.433792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.744 [2024-05-15 02:01:55.433820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.744 qpair failed and we were unable to recover it. 
00:33:31.744 [2024-05-15 02:01:55.433966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.744 [2024-05-15 02:01:55.434068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.744 [2024-05-15 02:01:55.434093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.744 qpair failed and we were unable to recover it. 00:33:31.744 [2024-05-15 02:01:55.434239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.744 [2024-05-15 02:01:55.434372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.744 [2024-05-15 02:01:55.434401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.744 qpair failed and we were unable to recover it. 00:33:31.744 [2024-05-15 02:01:55.434552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.744 [2024-05-15 02:01:55.434644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.744 [2024-05-15 02:01:55.434668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.744 qpair failed and we were unable to recover it. 00:33:31.744 [2024-05-15 02:01:55.434785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.744 [2024-05-15 02:01:55.434882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.744 [2024-05-15 02:01:55.434909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.744 qpair failed and we were unable to recover it. 00:33:31.744 [2024-05-15 02:01:55.435044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.744 [2024-05-15 02:01:55.435206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.744 [2024-05-15 02:01:55.435237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.744 qpair failed and we were unable to recover it. 00:33:31.744 [2024-05-15 02:01:55.435364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.744 [2024-05-15 02:01:55.435478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.744 [2024-05-15 02:01:55.435507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.744 qpair failed and we were unable to recover it. 00:33:31.744 [2024-05-15 02:01:55.435620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.744 [2024-05-15 02:01:55.435753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.744 [2024-05-15 02:01:55.435779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.744 qpair failed and we were unable to recover it. 
00:33:31.744 [2024-05-15 02:01:55.435887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.744 [2024-05-15 02:01:55.435983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.744 [2024-05-15 02:01:55.436010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.744 qpair failed and we were unable to recover it. 00:33:31.744 [2024-05-15 02:01:55.436105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.744 [2024-05-15 02:01:55.436235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.744 [2024-05-15 02:01:55.436262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.744 qpair failed and we were unable to recover it. 00:33:31.744 [2024-05-15 02:01:55.436363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.744 [2024-05-15 02:01:55.436464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.744 [2024-05-15 02:01:55.436490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.744 qpair failed and we were unable to recover it. 00:33:31.744 [2024-05-15 02:01:55.436619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.744 [2024-05-15 02:01:55.436716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.744 [2024-05-15 02:01:55.436742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.744 qpair failed and we were unable to recover it. 00:33:31.744 [2024-05-15 02:01:55.436826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.744 [2024-05-15 02:01:55.436943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.744 [2024-05-15 02:01:55.436968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.744 qpair failed and we were unable to recover it. 00:33:31.744 [2024-05-15 02:01:55.437096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.744 [2024-05-15 02:01:55.437190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.744 [2024-05-15 02:01:55.437233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.744 qpair failed and we were unable to recover it. 00:33:31.744 [2024-05-15 02:01:55.437381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.744 [2024-05-15 02:01:55.437489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.744 [2024-05-15 02:01:55.437519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.744 qpair failed and we were unable to recover it. 
00:33:31.744 [2024-05-15 02:01:55.437649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.744 [2024-05-15 02:01:55.437777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.744 [2024-05-15 02:01:55.437805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.744 qpair failed and we were unable to recover it. 00:33:31.744 [2024-05-15 02:01:55.437921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.744 [2024-05-15 02:01:55.438021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.744 [2024-05-15 02:01:55.438047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.744 qpair failed and we were unable to recover it. 00:33:31.744 [2024-05-15 02:01:55.438141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.744 [2024-05-15 02:01:55.438235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.744 [2024-05-15 02:01:55.438261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.744 qpair failed and we were unable to recover it. 00:33:31.744 [2024-05-15 02:01:55.438360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.744 [2024-05-15 02:01:55.438546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.744 [2024-05-15 02:01:55.438571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.744 qpair failed and we were unable to recover it. 00:33:31.744 [2024-05-15 02:01:55.438685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.744 [2024-05-15 02:01:55.438804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.744 [2024-05-15 02:01:55.438829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.744 qpair failed and we were unable to recover it. 00:33:31.744 [2024-05-15 02:01:55.438956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.744 [2024-05-15 02:01:55.439080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.744 [2024-05-15 02:01:55.439105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.744 qpair failed and we were unable to recover it. 00:33:31.744 [2024-05-15 02:01:55.439210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.744 [2024-05-15 02:01:55.439310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.744 [2024-05-15 02:01:55.439340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.744 qpair failed and we were unable to recover it. 
00:33:31.744 [2024-05-15 02:01:55.439485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.744 [2024-05-15 02:01:55.439573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.744 [2024-05-15 02:01:55.439599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:31.744 qpair failed and we were unable to recover it.
[... the same four-line sequence (two posix.c:1037:posix_sock_create connect() failures with errno = 111, one nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock error for tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats continuously from 02:01:55.439 through 02:01:55.480, roughly 150 times, with only the timestamps changing ...]
00:33:31.750 [2024-05-15 02:01:55.480602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.750 [2024-05-15 02:01:55.480723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.750 [2024-05-15 02:01:55.480751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.750 qpair failed and we were unable to recover it. 00:33:31.750 [2024-05-15 02:01:55.480892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.750 [2024-05-15 02:01:55.481011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.750 [2024-05-15 02:01:55.481038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.750 qpair failed and we were unable to recover it. 00:33:31.750 [2024-05-15 02:01:55.481131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.750 [2024-05-15 02:01:55.481263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.750 [2024-05-15 02:01:55.481289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.750 qpair failed and we were unable to recover it. 00:33:31.750 [2024-05-15 02:01:55.481429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.750 [2024-05-15 02:01:55.481607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.750 [2024-05-15 02:01:55.481632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.750 qpair failed and we were unable to recover it. 00:33:31.750 [2024-05-15 02:01:55.481759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.750 [2024-05-15 02:01:55.481875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.750 [2024-05-15 02:01:55.481904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.750 qpair failed and we were unable to recover it. 00:33:31.750 [2024-05-15 02:01:55.482074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.750 [2024-05-15 02:01:55.482198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.750 [2024-05-15 02:01:55.482245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.750 qpair failed and we were unable to recover it. 00:33:31.750 [2024-05-15 02:01:55.482348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.750 [2024-05-15 02:01:55.482445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.750 [2024-05-15 02:01:55.482475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.750 qpair failed and we were unable to recover it. 
00:33:31.750 [2024-05-15 02:01:55.482568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.750 [2024-05-15 02:01:55.482739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.750 [2024-05-15 02:01:55.482766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.750 qpair failed and we were unable to recover it. 00:33:31.750 [2024-05-15 02:01:55.482866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.750 [2024-05-15 02:01:55.482979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.750 [2024-05-15 02:01:55.483013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.750 qpair failed and we were unable to recover it. 00:33:31.750 [2024-05-15 02:01:55.483134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.750 [2024-05-15 02:01:55.483257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.750 [2024-05-15 02:01:55.483285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.750 qpair failed and we were unable to recover it. 00:33:31.750 [2024-05-15 02:01:55.483418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.750 [2024-05-15 02:01:55.483562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.750 [2024-05-15 02:01:55.483587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.750 qpair failed and we were unable to recover it. 00:33:31.750 [2024-05-15 02:01:55.483708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.750 [2024-05-15 02:01:55.483858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.750 [2024-05-15 02:01:55.483899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.750 qpair failed and we were unable to recover it. 00:33:31.750 [2024-05-15 02:01:55.484049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.750 [2024-05-15 02:01:55.484143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.750 [2024-05-15 02:01:55.484167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.750 qpair failed and we were unable to recover it. 00:33:31.750 [2024-05-15 02:01:55.484265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.750 [2024-05-15 02:01:55.484359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.750 [2024-05-15 02:01:55.484385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.750 qpair failed and we were unable to recover it. 
00:33:31.750 [2024-05-15 02:01:55.484507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.750 [2024-05-15 02:01:55.484621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.750 [2024-05-15 02:01:55.484647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.750 qpair failed and we were unable to recover it. 00:33:31.750 [2024-05-15 02:01:55.484794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.750 [2024-05-15 02:01:55.484918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.750 [2024-05-15 02:01:55.484946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.750 qpair failed and we were unable to recover it. 00:33:31.750 [2024-05-15 02:01:55.485092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.750 [2024-05-15 02:01:55.485223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.750 [2024-05-15 02:01:55.485255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.750 qpair failed and we were unable to recover it. 00:33:31.750 [2024-05-15 02:01:55.485403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.750 [2024-05-15 02:01:55.485508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.750 [2024-05-15 02:01:55.485534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.750 qpair failed and we were unable to recover it. 00:33:31.750 [2024-05-15 02:01:55.485686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.750 [2024-05-15 02:01:55.485808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.750 [2024-05-15 02:01:55.485834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.750 qpair failed and we were unable to recover it. 00:33:31.750 [2024-05-15 02:01:55.485999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.750 [2024-05-15 02:01:55.486142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.750 [2024-05-15 02:01:55.486168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.750 qpair failed and we were unable to recover it. 00:33:31.751 [2024-05-15 02:01:55.486319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.486423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.486449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.751 qpair failed and we were unable to recover it. 
00:33:31.751 [2024-05-15 02:01:55.486570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.486734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.486760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.751 qpair failed and we were unable to recover it. 00:33:31.751 [2024-05-15 02:01:55.486846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.486955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.486981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.751 qpair failed and we were unable to recover it. 00:33:31.751 [2024-05-15 02:01:55.487104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.487202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.487238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.751 qpair failed and we were unable to recover it. 00:33:31.751 [2024-05-15 02:01:55.487336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.487434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.487459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.751 qpair failed and we were unable to recover it. 00:33:31.751 [2024-05-15 02:01:55.487610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.487747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.487790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.751 qpair failed and we were unable to recover it. 00:33:31.751 [2024-05-15 02:01:55.487919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.488041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.488072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.751 qpair failed and we were unable to recover it. 00:33:31.751 [2024-05-15 02:01:55.488232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.488340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.488368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.751 qpair failed and we were unable to recover it. 
00:33:31.751 [2024-05-15 02:01:55.488535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.488650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.488676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.751 qpair failed and we were unable to recover it. 00:33:31.751 [2024-05-15 02:01:55.488769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.488890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.488915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.751 qpair failed and we were unable to recover it. 00:33:31.751 [2024-05-15 02:01:55.489024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.489158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.489187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.751 qpair failed and we were unable to recover it. 00:33:31.751 [2024-05-15 02:01:55.489316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.489438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.489465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.751 qpair failed and we were unable to recover it. 00:33:31.751 [2024-05-15 02:01:55.489609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.489720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.489745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.751 qpair failed and we were unable to recover it. 00:33:31.751 [2024-05-15 02:01:55.489920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.490077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.490106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.751 qpair failed and we were unable to recover it. 00:33:31.751 [2024-05-15 02:01:55.490206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.490318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.490347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.751 qpair failed and we were unable to recover it. 
00:33:31.751 [2024-05-15 02:01:55.490506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.490626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.490652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.751 qpair failed and we were unable to recover it. 00:33:31.751 [2024-05-15 02:01:55.490799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.490964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.490990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.751 qpair failed and we were unable to recover it. 00:33:31.751 [2024-05-15 02:01:55.491155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.491282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.491311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.751 qpair failed and we were unable to recover it. 00:33:31.751 [2024-05-15 02:01:55.491427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.491578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.491603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.751 qpair failed and we were unable to recover it. 00:33:31.751 [2024-05-15 02:01:55.491698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.491855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.491883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.751 qpair failed and we were unable to recover it. 00:33:31.751 [2024-05-15 02:01:55.492055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.492223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.492252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.751 qpair failed and we were unable to recover it. 00:33:31.751 [2024-05-15 02:01:55.492367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.492493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.492529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.751 qpair failed and we were unable to recover it. 
00:33:31.751 [2024-05-15 02:01:55.492699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.492871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.492897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.751 qpair failed and we were unable to recover it. 00:33:31.751 [2024-05-15 02:01:55.493018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.493150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.493179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.751 qpair failed and we were unable to recover it. 00:33:31.751 [2024-05-15 02:01:55.493314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.493405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.493431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.751 qpair failed and we were unable to recover it. 00:33:31.751 [2024-05-15 02:01:55.493579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.493715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.493741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.751 qpair failed and we were unable to recover it. 00:33:31.751 [2024-05-15 02:01:55.493857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.493979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.494004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.751 qpair failed and we were unable to recover it. 00:33:31.751 [2024-05-15 02:01:55.494130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.494255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.494281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.751 qpair failed and we were unable to recover it. 00:33:31.751 [2024-05-15 02:01:55.494380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.494479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.751 [2024-05-15 02:01:55.494504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.751 qpair failed and we were unable to recover it. 
00:33:31.751 [2024-05-15 02:01:55.494606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.494754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.494780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.752 qpair failed and we were unable to recover it. 00:33:31.752 [2024-05-15 02:01:55.494881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.494991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.495017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.752 qpair failed and we were unable to recover it. 00:33:31.752 [2024-05-15 02:01:55.495142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.495300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.495328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.752 qpair failed and we were unable to recover it. 00:33:31.752 [2024-05-15 02:01:55.495449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.495579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.495604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.752 qpair failed and we were unable to recover it. 00:33:31.752 [2024-05-15 02:01:55.495729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.495821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.495846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.752 qpair failed and we were unable to recover it. 00:33:31.752 [2024-05-15 02:01:55.496012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.496143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.496172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.752 qpair failed and we were unable to recover it. 00:33:31.752 [2024-05-15 02:01:55.496298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.496443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.496467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.752 qpair failed and we were unable to recover it. 
00:33:31.752 [2024-05-15 02:01:55.496633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.496780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.496805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.752 qpair failed and we were unable to recover it. 00:33:31.752 [2024-05-15 02:01:55.496956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.497131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.497157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.752 qpair failed and we were unable to recover it. 00:33:31.752 [2024-05-15 02:01:55.497266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.497409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.497437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.752 qpair failed and we were unable to recover it. 00:33:31.752 [2024-05-15 02:01:55.497625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.497739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.497780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.752 qpair failed and we were unable to recover it. 00:33:31.752 [2024-05-15 02:01:55.497912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.498065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.498091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.752 qpair failed and we were unable to recover it. 00:33:31.752 [2024-05-15 02:01:55.498193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.498372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.498401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.752 qpair failed and we were unable to recover it. 00:33:31.752 [2024-05-15 02:01:55.498538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.498659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.498683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.752 qpair failed and we were unable to recover it. 
00:33:31.752 [2024-05-15 02:01:55.498781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.498944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.498971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.752 qpair failed and we were unable to recover it. 00:33:31.752 [2024-05-15 02:01:55.499133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.499305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.499335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.752 qpair failed and we were unable to recover it. 00:33:31.752 [2024-05-15 02:01:55.499457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.499588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.499613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.752 qpair failed and we were unable to recover it. 00:33:31.752 [2024-05-15 02:01:55.499747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.499861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.499890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.752 qpair failed and we were unable to recover it. 00:33:31.752 [2024-05-15 02:01:55.500030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.500187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.500223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.752 qpair failed and we were unable to recover it. 00:33:31.752 [2024-05-15 02:01:55.500337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.500460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.500486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.752 qpair failed and we were unable to recover it. 00:33:31.752 [2024-05-15 02:01:55.500590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.500733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.500762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.752 qpair failed and we were unable to recover it. 
00:33:31.752 [2024-05-15 02:01:55.500923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.501024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.501053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.752 qpair failed and we were unable to recover it. 00:33:31.752 [2024-05-15 02:01:55.501197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.501302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.501328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.752 qpair failed and we were unable to recover it. 00:33:31.752 [2024-05-15 02:01:55.501467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.501573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.501601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.752 qpair failed and we were unable to recover it. 00:33:31.752 [2024-05-15 02:01:55.501737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.501865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.501893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.752 qpair failed and we were unable to recover it. 00:33:31.752 [2024-05-15 02:01:55.502037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.502194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.502260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.752 qpair failed and we were unable to recover it. 00:33:31.752 [2024-05-15 02:01:55.502429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.502539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.502564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.752 qpair failed and we were unable to recover it. 00:33:31.752 [2024-05-15 02:01:55.502658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.502770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.502795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.752 qpair failed and we were unable to recover it. 
00:33:31.752 [2024-05-15 02:01:55.502912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.503001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.752 [2024-05-15 02:01:55.503026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.752 qpair failed and we were unable to recover it. 00:33:31.752 [2024-05-15 02:01:55.503150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.503314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.503343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.753 qpair failed and we were unable to recover it. 00:33:31.753 [2024-05-15 02:01:55.503475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.503615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.503640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.753 qpair failed and we were unable to recover it. 00:33:31.753 [2024-05-15 02:01:55.503791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.503883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.503908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.753 qpair failed and we were unable to recover it. 00:33:31.753 [2024-05-15 02:01:55.504033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.504169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.504196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.753 qpair failed and we were unable to recover it. 00:33:31.753 [2024-05-15 02:01:55.504303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.504429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.504455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.753 qpair failed and we were unable to recover it. 00:33:31.753 [2024-05-15 02:01:55.504577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.504697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.504723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.753 qpair failed and we were unable to recover it. 
00:33:31.753 [2024-05-15 02:01:55.504821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.504935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.504961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.753 qpair failed and we were unable to recover it. 00:33:31.753 [2024-05-15 02:01:55.505063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.505155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.505181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.753 qpair failed and we were unable to recover it. 00:33:31.753 [2024-05-15 02:01:55.505276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.505425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.505451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.753 qpair failed and we were unable to recover it. 00:33:31.753 [2024-05-15 02:01:55.505606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.505741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.505768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.753 qpair failed and we were unable to recover it. 00:33:31.753 [2024-05-15 02:01:55.505861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.505962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.505990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.753 qpair failed and we were unable to recover it. 00:33:31.753 [2024-05-15 02:01:55.506125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.506224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.506251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.753 qpair failed and we were unable to recover it. 00:33:31.753 [2024-05-15 02:01:55.506352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.506469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.506496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.753 qpair failed and we were unable to recover it. 
00:33:31.753 [2024-05-15 02:01:55.506643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.506763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.506789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.753 qpair failed and we were unable to recover it. 00:33:31.753 [2024-05-15 02:01:55.506917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.507033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.507059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.753 qpair failed and we were unable to recover it. 00:33:31.753 [2024-05-15 02:01:55.507238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.507367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.507395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.753 qpair failed and we were unable to recover it. 00:33:31.753 [2024-05-15 02:01:55.507556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.507690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.507717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.753 qpair failed and we were unable to recover it. 00:33:31.753 [2024-05-15 02:01:55.507883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.508011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.508036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.753 qpair failed and we were unable to recover it. 00:33:31.753 [2024-05-15 02:01:55.508141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.508319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.508345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.753 qpair failed and we were unable to recover it. 00:33:31.753 [2024-05-15 02:01:55.508496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.508659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.508687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.753 qpair failed and we were unable to recover it. 
00:33:31.753 [2024-05-15 02:01:55.508813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.508930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.508955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.753 qpair failed and we were unable to recover it. 00:33:31.753 [2024-05-15 02:01:55.509079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.509197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.509244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.753 qpair failed and we were unable to recover it. 00:33:31.753 [2024-05-15 02:01:55.509407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.509566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.509595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.753 qpair failed and we were unable to recover it. 00:33:31.753 [2024-05-15 02:01:55.509711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.509830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.509856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.753 qpair failed and we were unable to recover it. 00:33:31.753 [2024-05-15 02:01:55.509956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.510072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.510098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.753 qpair failed and we were unable to recover it. 00:33:31.753 [2024-05-15 02:01:55.510243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.510332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.753 [2024-05-15 02:01:55.510359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.754 qpair failed and we were unable to recover it. 00:33:31.754 [2024-05-15 02:01:55.510479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.754 [2024-05-15 02:01:55.510627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.754 [2024-05-15 02:01:55.510669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.754 qpair failed and we were unable to recover it. 
[... the same four-record failure group (two posix_sock_create connect() errors with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x7f2124000b90 against 10.0.0.2:4420, then "qpair failed and we were unable to recover it.") repeats continuously, with only the timestamps advancing, between the first occurrence above and the final occurrence below ...]
00:33:31.758 [2024-05-15 02:01:55.551770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.758 [2024-05-15 02:01:55.551916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.758 [2024-05-15 02:01:55.551958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.758 qpair failed and we were unable to recover it. 00:33:31.758 [2024-05-15 02:01:55.552090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.758 [2024-05-15 02:01:55.552200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.758 [2024-05-15 02:01:55.552233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.758 qpair failed and we were unable to recover it. 00:33:31.758 [2024-05-15 02:01:55.552380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.758 [2024-05-15 02:01:55.552524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.758 [2024-05-15 02:01:55.552550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.758 qpair failed and we were unable to recover it. 00:33:31.758 [2024-05-15 02:01:55.552676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.758 [2024-05-15 02:01:55.552794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.758 [2024-05-15 02:01:55.552820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.758 qpair failed and we were unable to recover it. 00:33:31.758 [2024-05-15 02:01:55.552919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.758 [2024-05-15 02:01:55.553039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.758 [2024-05-15 02:01:55.553065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.758 qpair failed and we were unable to recover it. 00:33:31.758 [2024-05-15 02:01:55.553228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.758 [2024-05-15 02:01:55.553381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.758 [2024-05-15 02:01:55.553409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.758 qpair failed and we were unable to recover it. 00:33:31.758 [2024-05-15 02:01:55.553557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.758 [2024-05-15 02:01:55.553680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.758 [2024-05-15 02:01:55.553706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.758 qpair failed and we were unable to recover it. 
00:33:31.758 [2024-05-15 02:01:55.553822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.758 [2024-05-15 02:01:55.553947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.758 [2024-05-15 02:01:55.553974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.758 qpair failed and we were unable to recover it. 00:33:31.758 [2024-05-15 02:01:55.554142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.758 [2024-05-15 02:01:55.554268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.758 [2024-05-15 02:01:55.554294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.758 qpair failed and we were unable to recover it. 00:33:31.758 [2024-05-15 02:01:55.554413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.758 [2024-05-15 02:01:55.554498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.758 [2024-05-15 02:01:55.554524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.758 qpair failed and we were unable to recover it. 00:33:31.758 [2024-05-15 02:01:55.554670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.758 [2024-05-15 02:01:55.554848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.758 [2024-05-15 02:01:55.554874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.758 qpair failed and we were unable to recover it. 00:33:31.758 [2024-05-15 02:01:55.554975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.758 [2024-05-15 02:01:55.555121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.758 [2024-05-15 02:01:55.555145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.758 qpair failed and we were unable to recover it. 00:33:31.758 [2024-05-15 02:01:55.555290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.758 [2024-05-15 02:01:55.555388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.758 [2024-05-15 02:01:55.555413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.758 qpair failed and we were unable to recover it. 00:33:31.758 [2024-05-15 02:01:55.555554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.758 [2024-05-15 02:01:55.555713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.758 [2024-05-15 02:01:55.555741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.758 qpair failed and we were unable to recover it. 
00:33:31.758 [2024-05-15 02:01:55.555873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.758 [2024-05-15 02:01:55.556005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.758 [2024-05-15 02:01:55.556034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.758 qpair failed and we were unable to recover it. 00:33:31.758 [2024-05-15 02:01:55.556182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.758 [2024-05-15 02:01:55.556305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.758 [2024-05-15 02:01:55.556331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.758 qpair failed and we were unable to recover it. 00:33:31.758 [2024-05-15 02:01:55.556453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.758 [2024-05-15 02:01:55.556580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.556607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.759 qpair failed and we were unable to recover it. 00:33:31.759 [2024-05-15 02:01:55.556766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.556870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.556898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.759 qpair failed and we were unable to recover it. 00:33:31.759 [2024-05-15 02:01:55.557052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.557175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.557200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.759 qpair failed and we were unable to recover it. 00:33:31.759 [2024-05-15 02:01:55.557319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.557429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.557457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.759 qpair failed and we were unable to recover it. 00:33:31.759 [2024-05-15 02:01:55.557598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.557704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.557732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.759 qpair failed and we were unable to recover it. 
00:33:31.759 [2024-05-15 02:01:55.557868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.557993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.558019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.759 qpair failed and we were unable to recover it. 00:33:31.759 [2024-05-15 02:01:55.558134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.558297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.558326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.759 qpair failed and we were unable to recover it. 00:33:31.759 [2024-05-15 02:01:55.558441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.558596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.558623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.759 qpair failed and we were unable to recover it. 00:33:31.759 [2024-05-15 02:01:55.558766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.558887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.558912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.759 qpair failed and we were unable to recover it. 00:33:31.759 [2024-05-15 02:01:55.559052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.559225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.559251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.759 qpair failed and we were unable to recover it. 00:33:31.759 [2024-05-15 02:01:55.559353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.559468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.559493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.759 qpair failed and we were unable to recover it. 00:33:31.759 [2024-05-15 02:01:55.559611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.559753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.559779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.759 qpair failed and we were unable to recover it. 
00:33:31.759 [2024-05-15 02:01:55.559901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.560010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.560039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.759 qpair failed and we were unable to recover it. 00:33:31.759 [2024-05-15 02:01:55.560196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.560357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.560385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.759 qpair failed and we were unable to recover it. 00:33:31.759 [2024-05-15 02:01:55.560496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.560620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.560646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.759 qpair failed and we were unable to recover it. 00:33:31.759 [2024-05-15 02:01:55.560757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.560912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.560939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.759 qpair failed and we were unable to recover it. 00:33:31.759 [2024-05-15 02:01:55.561068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.561236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.561262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.759 qpair failed and we were unable to recover it. 00:33:31.759 [2024-05-15 02:01:55.561416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.561585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.561634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.759 qpair failed and we were unable to recover it. 00:33:31.759 [2024-05-15 02:01:55.561761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.561915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.561943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.759 qpair failed and we were unable to recover it. 
00:33:31.759 [2024-05-15 02:01:55.562073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.562172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.562199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.759 qpair failed and we were unable to recover it. 00:33:31.759 [2024-05-15 02:01:55.562325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.562418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.562444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.759 qpair failed and we were unable to recover it. 00:33:31.759 [2024-05-15 02:01:55.562589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.562717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.562760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.759 qpair failed and we were unable to recover it. 00:33:31.759 [2024-05-15 02:01:55.562905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.563047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.563076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.759 qpair failed and we were unable to recover it. 00:33:31.759 [2024-05-15 02:01:55.563221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.563345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.563370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.759 qpair failed and we were unable to recover it. 00:33:31.759 [2024-05-15 02:01:55.563531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.563636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.563665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.759 qpair failed and we were unable to recover it. 00:33:31.759 [2024-05-15 02:01:55.563794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.563923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.563952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.759 qpair failed and we were unable to recover it. 
00:33:31.759 [2024-05-15 02:01:55.564098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.564184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.564209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.759 qpair failed and we were unable to recover it. 00:33:31.759 [2024-05-15 02:01:55.564341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.564441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.564467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.759 qpair failed and we were unable to recover it. 00:33:31.759 [2024-05-15 02:01:55.564592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.564749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.564776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.759 qpair failed and we were unable to recover it. 00:33:31.759 [2024-05-15 02:01:55.564943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.565060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.565085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.759 qpair failed and we were unable to recover it. 00:33:31.759 [2024-05-15 02:01:55.565179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.565322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.565352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.759 qpair failed and we were unable to recover it. 00:33:31.759 [2024-05-15 02:01:55.565517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.565663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.565688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.759 qpair failed and we were unable to recover it. 00:33:31.759 [2024-05-15 02:01:55.565777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.565918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.565944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.759 qpair failed and we were unable to recover it. 
00:33:31.759 [2024-05-15 02:01:55.566085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.566244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.759 [2024-05-15 02:01:55.566270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.759 qpair failed and we were unable to recover it. 00:33:31.759 [2024-05-15 02:01:55.566391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.566549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.566577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.760 qpair failed and we were unable to recover it. 00:33:31.760 [2024-05-15 02:01:55.566746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.566868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.566894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.760 qpair failed and we were unable to recover it. 00:33:31.760 [2024-05-15 02:01:55.567045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.567177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.567202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.760 qpair failed and we were unable to recover it. 00:33:31.760 [2024-05-15 02:01:55.567369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.567473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.567503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.760 qpair failed and we were unable to recover it. 00:33:31.760 [2024-05-15 02:01:55.567626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.567782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.567808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.760 qpair failed and we were unable to recover it. 00:33:31.760 [2024-05-15 02:01:55.567993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.568117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.568143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.760 qpair failed and we were unable to recover it. 
00:33:31.760 [2024-05-15 02:01:55.568309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.568518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.568547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.760 qpair failed and we were unable to recover it. 00:33:31.760 [2024-05-15 02:01:55.568689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.568815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.568841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.760 qpair failed and we were unable to recover it. 00:33:31.760 [2024-05-15 02:01:55.568967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.569148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.569173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.760 qpair failed and we were unable to recover it. 00:33:31.760 [2024-05-15 02:01:55.569300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.569416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.569442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.760 qpair failed and we were unable to recover it. 00:33:31.760 [2024-05-15 02:01:55.569567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.569680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.569706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.760 qpair failed and we were unable to recover it. 00:33:31.760 [2024-05-15 02:01:55.569816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.569927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.569956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.760 qpair failed and we were unable to recover it. 00:33:31.760 [2024-05-15 02:01:55.570103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.570237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.570264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.760 qpair failed and we were unable to recover it. 
00:33:31.760 [2024-05-15 02:01:55.570388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.570514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.570540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.760 qpair failed and we were unable to recover it. 00:33:31.760 [2024-05-15 02:01:55.570677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.570853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.570879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.760 qpair failed and we were unable to recover it. 00:33:31.760 [2024-05-15 02:01:55.570972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.571118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.571147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.760 qpair failed and we were unable to recover it. 00:33:31.760 [2024-05-15 02:01:55.571292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.571409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.571435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.760 qpair failed and we were unable to recover it. 00:33:31.760 [2024-05-15 02:01:55.571586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.571695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.571724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.760 qpair failed and we were unable to recover it. 00:33:31.760 [2024-05-15 02:01:55.571831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.571991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.572020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.760 qpair failed and we were unable to recover it. 00:33:31.760 [2024-05-15 02:01:55.572153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.572272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.572298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.760 qpair failed and we were unable to recover it. 
00:33:31.760 [2024-05-15 02:01:55.572422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.572557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.572586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.760 qpair failed and we were unable to recover it. 00:33:31.760 [2024-05-15 02:01:55.572716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.572822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.572851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.760 qpair failed and we were unable to recover it. 00:33:31.760 [2024-05-15 02:01:55.572969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.573121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.573147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.760 qpair failed and we were unable to recover it. 00:33:31.760 [2024-05-15 02:01:55.573291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.573395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.573429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.760 qpair failed and we were unable to recover it. 00:33:31.760 [2024-05-15 02:01:55.573550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.573648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.573673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.760 qpair failed and we were unable to recover it. 00:33:31.760 [2024-05-15 02:01:55.573771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.573898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.573924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.760 qpair failed and we were unable to recover it. 00:33:31.760 [2024-05-15 02:01:55.574070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.574154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.574179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.760 qpair failed and we were unable to recover it. 
00:33:31.760 [2024-05-15 02:01:55.574283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.574438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.574464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.760 qpair failed and we were unable to recover it. 00:33:31.760 [2024-05-15 02:01:55.574589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.574715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.574741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.760 qpair failed and we were unable to recover it. 00:33:31.760 [2024-05-15 02:01:55.574833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.574947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.574973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.760 qpair failed and we were unable to recover it. 00:33:31.760 [2024-05-15 02:01:55.575095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.575222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.575249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.760 qpair failed and we were unable to recover it. 00:33:31.760 [2024-05-15 02:01:55.575344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.575469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.575494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.760 qpair failed and we were unable to recover it. 00:33:31.760 [2024-05-15 02:01:55.575629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.575802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.575828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.760 qpair failed and we were unable to recover it. 00:33:31.760 [2024-05-15 02:01:55.575967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.576108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.760 [2024-05-15 02:01:55.576137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.760 qpair failed and we were unable to recover it. 
00:33:31.761 [2024-05-15 02:01:55.576273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.761 [2024-05-15 02:01:55.576395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.761 [2024-05-15 02:01:55.576421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.761 qpair failed and we were unable to recover it. 00:33:31.761 [2024-05-15 02:01:55.576577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.761 [2024-05-15 02:01:55.576719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.761 [2024-05-15 02:01:55.576744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.761 qpair failed and we were unable to recover it. 00:33:31.761 [2024-05-15 02:01:55.576877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.761 [2024-05-15 02:01:55.577008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.761 [2024-05-15 02:01:55.577037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.761 qpair failed and we were unable to recover it. 00:33:31.761 [2024-05-15 02:01:55.577180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.761 [2024-05-15 02:01:55.577315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.761 [2024-05-15 02:01:55.577340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.761 qpair failed and we were unable to recover it. 00:33:31.761 [2024-05-15 02:01:55.577466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.761 [2024-05-15 02:01:55.577570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.761 [2024-05-15 02:01:55.577599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.761 qpair failed and we were unable to recover it. 00:33:31.761 [2024-05-15 02:01:55.577765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.761 [2024-05-15 02:01:55.577906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.761 [2024-05-15 02:01:55.577931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.761 qpair failed and we were unable to recover it. 00:33:31.761 [2024-05-15 02:01:55.578081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.761 [2024-05-15 02:01:55.578180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.761 [2024-05-15 02:01:55.578205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.761 qpair failed and we were unable to recover it. 
00:33:31.761 [2024-05-15 02:01:55.578333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.761 [2024-05-15 02:01:55.578473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.761 [2024-05-15 02:01:55.578498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.761 qpair failed and we were unable to recover it. 00:33:31.761 [2024-05-15 02:01:55.578591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.761 [2024-05-15 02:01:55.578679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.761 [2024-05-15 02:01:55.578704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.761 qpair failed and we were unable to recover it. 00:33:31.761 [2024-05-15 02:01:55.578846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.761 [2024-05-15 02:01:55.578945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.761 [2024-05-15 02:01:55.578975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.761 qpair failed and we were unable to recover it. 00:33:31.761 [2024-05-15 02:01:55.579079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.761 [2024-05-15 02:01:55.579197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.761 [2024-05-15 02:01:55.579228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.761 qpair failed and we were unable to recover it. 00:33:31.761 [2024-05-15 02:01:55.579359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.761 [2024-05-15 02:01:55.579502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.761 [2024-05-15 02:01:55.579529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.761 qpair failed and we were unable to recover it. 00:33:31.761 [2024-05-15 02:01:55.579663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.761 [2024-05-15 02:01:55.579804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.761 [2024-05-15 02:01:55.579829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.761 qpair failed and we were unable to recover it. 00:33:31.761 [2024-05-15 02:01:55.579970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.761 [2024-05-15 02:01:55.580066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.761 [2024-05-15 02:01:55.580093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.761 qpair failed and we were unable to recover it. 
00:33:31.761 [2024-05-15 02:01:55.580226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.761 [2024-05-15 02:01:55.580334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.761 [2024-05-15 02:01:55.580363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.761 qpair failed and we were unable to recover it. 00:33:31.761 [2024-05-15 02:01:55.580492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.761 [2024-05-15 02:01:55.580607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.761 [2024-05-15 02:01:55.580633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.761 qpair failed and we were unable to recover it. 00:33:31.761 [2024-05-15 02:01:55.580805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.761 [2024-05-15 02:01:55.580936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.761 [2024-05-15 02:01:55.580979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.761 qpair failed and we were unable to recover it. 00:33:31.761 [2024-05-15 02:01:55.581105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.761 [2024-05-15 02:01:55.581252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.761 [2024-05-15 02:01:55.581281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.761 qpair failed and we were unable to recover it. 00:33:31.761 [2024-05-15 02:01:55.581396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.761 [2024-05-15 02:01:55.581494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.761 [2024-05-15 02:01:55.581519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.761 qpair failed and we were unable to recover it. 00:33:31.761 [2024-05-15 02:01:55.581638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.761 [2024-05-15 02:01:55.581728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.761 [2024-05-15 02:01:55.581776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.761 qpair failed and we were unable to recover it. 00:33:31.761 [2024-05-15 02:01:55.581885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.761 [2024-05-15 02:01:55.582016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.761 [2024-05-15 02:01:55.582046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.761 qpair failed and we were unable to recover it. 
00:33:31.761 [2024-05-15 02:01:55.582221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.761 [2024-05-15 02:01:55.582346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.761 [2024-05-15 02:01:55.582371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:31.761 qpair failed and we were unable to recover it.
00:33:31.761 [... the same four-record failure sequence (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats for every subsequent reconnect attempt, with only the timestamps advancing, through 02:01:55.627 ...]
00:33:31.765 [2024-05-15 02:01:55.627376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.765 [2024-05-15 02:01:55.627476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.765 [2024-05-15 02:01:55.627501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:31.765 qpair failed and we were unable to recover it.
00:33:31.765 [2024-05-15 02:01:55.627649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.765 [2024-05-15 02:01:55.627779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.765 [2024-05-15 02:01:55.627808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.765 qpair failed and we were unable to recover it. 00:33:31.765 [2024-05-15 02:01:55.627948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.628060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.628085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.766 qpair failed and we were unable to recover it. 00:33:31.766 [2024-05-15 02:01:55.628209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.628361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.628401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.766 qpair failed and we were unable to recover it. 00:33:31.766 [2024-05-15 02:01:55.628534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.628669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.628696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.766 qpair failed and we were unable to recover it. 00:33:31.766 [2024-05-15 02:01:55.628848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.628942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.628967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.766 qpair failed and we were unable to recover it. 00:33:31.766 [2024-05-15 02:01:55.629114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.629204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.629236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.766 qpair failed and we were unable to recover it. 00:33:31.766 [2024-05-15 02:01:55.629360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.629526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.629552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.766 qpair failed and we were unable to recover it. 
00:33:31.766 [2024-05-15 02:01:55.629666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.629770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.629796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.766 qpair failed and we were unable to recover it. 00:33:31.766 [2024-05-15 02:01:55.629937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.630047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.630073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.766 qpair failed and we were unable to recover it. 00:33:31.766 [2024-05-15 02:01:55.630180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.630299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.630329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.766 qpair failed and we were unable to recover it. 00:33:31.766 [2024-05-15 02:01:55.630494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.630585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.630612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.766 qpair failed and we were unable to recover it. 00:33:31.766 [2024-05-15 02:01:55.630711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.630881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.630907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.766 qpair failed and we were unable to recover it. 00:33:31.766 [2024-05-15 02:01:55.631049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.631190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.631238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.766 qpair failed and we were unable to recover it. 00:33:31.766 [2024-05-15 02:01:55.631400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.631518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.631543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.766 qpair failed and we were unable to recover it. 
00:33:31.766 [2024-05-15 02:01:55.631688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.631820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.631845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.766 qpair failed and we were unable to recover it. 00:33:31.766 [2024-05-15 02:01:55.632018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.632149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.632177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.766 qpair failed and we were unable to recover it. 00:33:31.766 [2024-05-15 02:01:55.632325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.632421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.632447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.766 qpair failed and we were unable to recover it. 00:33:31.766 [2024-05-15 02:01:55.632598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.632693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.632717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.766 qpair failed and we were unable to recover it. 00:33:31.766 [2024-05-15 02:01:55.632844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.632983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.633009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.766 qpair failed and we were unable to recover it. 00:33:31.766 [2024-05-15 02:01:55.633145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.633290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.633316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.766 qpair failed and we were unable to recover it. 00:33:31.766 [2024-05-15 02:01:55.633441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.633596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.633637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.766 qpair failed and we were unable to recover it. 
00:33:31.766 [2024-05-15 02:01:55.633772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.633933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.633961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.766 qpair failed and we were unable to recover it. 00:33:31.766 [2024-05-15 02:01:55.634097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.634232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.634262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.766 qpair failed and we were unable to recover it. 00:33:31.766 [2024-05-15 02:01:55.634408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.634526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.634551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.766 qpair failed and we were unable to recover it. 00:33:31.766 [2024-05-15 02:01:55.634672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.634839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.634867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.766 qpair failed and we were unable to recover it. 00:33:31.766 [2024-05-15 02:01:55.635024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.635155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.635183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.766 qpair failed and we were unable to recover it. 00:33:31.766 [2024-05-15 02:01:55.635354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.635487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.635529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.766 qpair failed and we were unable to recover it. 00:33:31.766 [2024-05-15 02:01:55.635638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.635800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.635826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.766 qpair failed and we were unable to recover it. 
00:33:31.766 [2024-05-15 02:01:55.635954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.636080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.636106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.766 qpair failed and we were unable to recover it. 00:33:31.766 [2024-05-15 02:01:55.636235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.636328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.636354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.766 qpair failed and we were unable to recover it. 00:33:31.766 [2024-05-15 02:01:55.636457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.636553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.636578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.766 qpair failed and we were unable to recover it. 00:33:31.766 [2024-05-15 02:01:55.636669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.636765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.636790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.766 qpair failed and we were unable to recover it. 00:33:31.766 [2024-05-15 02:01:55.636909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.637002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.637027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.766 qpair failed and we were unable to recover it. 00:33:31.766 [2024-05-15 02:01:55.637131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.637272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.637301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.766 qpair failed and we were unable to recover it. 00:33:31.766 [2024-05-15 02:01:55.637461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.637607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.637632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.766 qpair failed and we were unable to recover it. 
00:33:31.766 [2024-05-15 02:01:55.637756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.637872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.637898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.766 qpair failed and we were unable to recover it. 00:33:31.766 [2024-05-15 02:01:55.638051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.638188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.638214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.766 qpair failed and we were unable to recover it. 00:33:31.766 [2024-05-15 02:01:55.638348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.638433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.638459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.766 qpair failed and we were unable to recover it. 00:33:31.766 [2024-05-15 02:01:55.638558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.638676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.638702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.766 qpair failed and we were unable to recover it. 00:33:31.766 [2024-05-15 02:01:55.638862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.638955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.766 [2024-05-15 02:01:55.638981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.767 qpair failed and we were unable to recover it. 00:33:31.767 [2024-05-15 02:01:55.639128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.767 [2024-05-15 02:01:55.639266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.767 [2024-05-15 02:01:55.639308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.767 qpair failed and we were unable to recover it. 00:33:31.767 [2024-05-15 02:01:55.639408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.767 [2024-05-15 02:01:55.639498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.767 [2024-05-15 02:01:55.639523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:31.767 qpair failed and we were unable to recover it. 
00:33:32.047 [2024-05-15 02:01:55.639638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.047 [2024-05-15 02:01:55.639750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.047 [2024-05-15 02:01:55.639775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.047 qpair failed and we were unable to recover it. 00:33:32.047 [2024-05-15 02:01:55.639906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.047 [2024-05-15 02:01:55.640002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.047 [2024-05-15 02:01:55.640028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.047 qpair failed and we were unable to recover it. 00:33:32.047 [2024-05-15 02:01:55.640155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.047 [2024-05-15 02:01:55.640243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.047 [2024-05-15 02:01:55.640270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.047 qpair failed and we were unable to recover it. 00:33:32.047 [2024-05-15 02:01:55.640368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.047 [2024-05-15 02:01:55.640465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.047 [2024-05-15 02:01:55.640491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.047 qpair failed and we were unable to recover it. 00:33:32.047 [2024-05-15 02:01:55.640608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.047 [2024-05-15 02:01:55.640699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.047 [2024-05-15 02:01:55.640743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.047 qpair failed and we were unable to recover it. 00:33:32.047 [2024-05-15 02:01:55.640871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.047 [2024-05-15 02:01:55.640990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.047 [2024-05-15 02:01:55.641016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.047 qpair failed and we were unable to recover it. 00:33:32.047 [2024-05-15 02:01:55.641156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.047 [2024-05-15 02:01:55.641294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.047 [2024-05-15 02:01:55.641323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.047 qpair failed and we were unable to recover it. 
00:33:32.047 [2024-05-15 02:01:55.641455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.047 [2024-05-15 02:01:55.641614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.047 [2024-05-15 02:01:55.641643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.047 qpair failed and we were unable to recover it. 00:33:32.047 [2024-05-15 02:01:55.641803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.047 [2024-05-15 02:01:55.641903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.047 [2024-05-15 02:01:55.641928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.047 qpair failed and we were unable to recover it. 00:33:32.047 [2024-05-15 02:01:55.642050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.047 [2024-05-15 02:01:55.642195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.047 [2024-05-15 02:01:55.642229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.047 qpair failed and we were unable to recover it. 00:33:32.047 [2024-05-15 02:01:55.642365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.047 [2024-05-15 02:01:55.642470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.047 [2024-05-15 02:01:55.642497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.047 qpair failed and we were unable to recover it. 00:33:32.047 [2024-05-15 02:01:55.642645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.047 [2024-05-15 02:01:55.642749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.047 [2024-05-15 02:01:55.642775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.047 qpair failed and we were unable to recover it. 00:33:32.047 [2024-05-15 02:01:55.642944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.047 [2024-05-15 02:01:55.643116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.047 [2024-05-15 02:01:55.643142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.047 qpair failed and we were unable to recover it. 00:33:32.047 [2024-05-15 02:01:55.643263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.047 [2024-05-15 02:01:55.643376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.047 [2024-05-15 02:01:55.643402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.047 qpair failed and we were unable to recover it. 
00:33:32.048 [2024-05-15 02:01:55.643507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.643629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.643656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.048 qpair failed and we were unable to recover it. 00:33:32.048 [2024-05-15 02:01:55.643800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.643930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.643959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.048 qpair failed and we were unable to recover it. 00:33:32.048 [2024-05-15 02:01:55.644119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.644243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.644286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.048 qpair failed and we were unable to recover it. 00:33:32.048 [2024-05-15 02:01:55.644386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.644531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.644557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.048 qpair failed and we were unable to recover it. 00:33:32.048 [2024-05-15 02:01:55.644731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.644863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.644891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.048 qpair failed and we were unable to recover it. 00:33:32.048 [2024-05-15 02:01:55.645028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.645153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.645180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.048 qpair failed and we were unable to recover it. 00:33:32.048 [2024-05-15 02:01:55.645332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.645453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.645478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.048 qpair failed and we were unable to recover it. 
00:33:32.048 [2024-05-15 02:01:55.645587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.645741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.645766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.048 qpair failed and we were unable to recover it. 00:33:32.048 [2024-05-15 02:01:55.645940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.646104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.646132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.048 qpair failed and we were unable to recover it. 00:33:32.048 [2024-05-15 02:01:55.646251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.646397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.646422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.048 qpair failed and we were unable to recover it. 00:33:32.048 [2024-05-15 02:01:55.646610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.646761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.646787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.048 qpair failed and we were unable to recover it. 00:33:32.048 [2024-05-15 02:01:55.646873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.646989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.647014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.048 qpair failed and we were unable to recover it. 00:33:32.048 [2024-05-15 02:01:55.647108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.647230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.647257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.048 qpair failed and we were unable to recover it. 00:33:32.048 [2024-05-15 02:01:55.647390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.647506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.647534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.048 qpair failed and we were unable to recover it. 
00:33:32.048 [2024-05-15 02:01:55.647664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.647763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.647805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.048 qpair failed and we were unable to recover it. 00:33:32.048 [2024-05-15 02:01:55.647949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.648039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.648063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.048 qpair failed and we were unable to recover it. 00:33:32.048 [2024-05-15 02:01:55.648227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.648332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.648359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.048 qpair failed and we were unable to recover it. 00:33:32.048 [2024-05-15 02:01:55.648491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.648648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.648675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.048 qpair failed and we were unable to recover it. 00:33:32.048 [2024-05-15 02:01:55.648822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.648941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.648966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.048 qpair failed and we were unable to recover it. 00:33:32.048 [2024-05-15 02:01:55.649118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.649227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.649256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.048 qpair failed and we were unable to recover it. 00:33:32.048 [2024-05-15 02:01:55.649404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.649572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.649598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.048 qpair failed and we were unable to recover it. 
00:33:32.048 [2024-05-15 02:01:55.649745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.649864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.649889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.048 qpair failed and we were unable to recover it. 00:33:32.048 [2024-05-15 02:01:55.650012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.650148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.650176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.048 qpair failed and we were unable to recover it. 00:33:32.048 [2024-05-15 02:01:55.650349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.650478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.650503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.048 qpair failed and we were unable to recover it. 00:33:32.048 [2024-05-15 02:01:55.650655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.650759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.650783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.048 qpair failed and we were unable to recover it. 00:33:32.048 [2024-05-15 02:01:55.650898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.651044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.651068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.048 qpair failed and we were unable to recover it. 00:33:32.048 [2024-05-15 02:01:55.651243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.651373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.651400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.048 qpair failed and we were unable to recover it. 00:33:32.048 [2024-05-15 02:01:55.651526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.651649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.651675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.048 qpair failed and we were unable to recover it. 
00:33:32.048 [2024-05-15 02:01:55.651779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.651876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.651901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.048 qpair failed and we were unable to recover it. 00:33:32.048 [2024-05-15 02:01:55.652018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.048 [2024-05-15 02:01:55.652127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.049 [2024-05-15 02:01:55.652154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.049 qpair failed and we were unable to recover it. 00:33:32.049 [2024-05-15 02:01:55.652329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.049 [2024-05-15 02:01:55.652456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.049 [2024-05-15 02:01:55.652481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.049 qpair failed and we were unable to recover it. 00:33:32.049 [2024-05-15 02:01:55.652585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.049 [2024-05-15 02:01:55.652746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.049 [2024-05-15 02:01:55.652774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.049 qpair failed and we were unable to recover it. 00:33:32.049 [2024-05-15 02:01:55.652953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.049 [2024-05-15 02:01:55.653095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.049 [2024-05-15 02:01:55.653121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.049 qpair failed and we were unable to recover it. 00:33:32.049 [2024-05-15 02:01:55.653255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.049 [2024-05-15 02:01:55.653399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.049 [2024-05-15 02:01:55.653441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.049 qpair failed and we were unable to recover it. 00:33:32.049 [2024-05-15 02:01:55.653603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.049 [2024-05-15 02:01:55.653733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.049 [2024-05-15 02:01:55.653761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.049 qpair failed and we were unable to recover it. 
00:33:32.049 [2024-05-15 02:01:55.653896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.049 [2024-05-15 02:01:55.654056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.049 [2024-05-15 02:01:55.654085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.049 qpair failed and we were unable to recover it. 00:33:32.049 [2024-05-15 02:01:55.654228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.049 [2024-05-15 02:01:55.654348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.049 [2024-05-15 02:01:55.654373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.049 qpair failed and we were unable to recover it. 00:33:32.049 [2024-05-15 02:01:55.654515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.049 [2024-05-15 02:01:55.654670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.049 [2024-05-15 02:01:55.654698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.049 qpair failed and we were unable to recover it. 00:33:32.049 [2024-05-15 02:01:55.654802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.049 [2024-05-15 02:01:55.654927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.049 [2024-05-15 02:01:55.654956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.049 qpair failed and we were unable to recover it. 00:33:32.049 [2024-05-15 02:01:55.655076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.049 [2024-05-15 02:01:55.655198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.049 [2024-05-15 02:01:55.655237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.049 qpair failed and we were unable to recover it. 00:33:32.049 [2024-05-15 02:01:55.655387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.049 [2024-05-15 02:01:55.655508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.049 [2024-05-15 02:01:55.655536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.049 qpair failed and we were unable to recover it. 00:33:32.049 [2024-05-15 02:01:55.655665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.049 [2024-05-15 02:01:55.655838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.049 [2024-05-15 02:01:55.655862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.049 qpair failed and we were unable to recover it. 
00:33:32.049 [2024-05-15 02:01:55.655983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.049 [2024-05-15 02:01:55.656097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.049 [2024-05-15 02:01:55.656121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.049 qpair failed and we were unable to recover it.
[... the same four-line failure group (two posix_sock_create connect() errors with errno = 111, one nvme_tcp_qpair_connect_sock error, then "qpair failed and we were unable to recover it.") repeats for every subsequent connection attempt against tqpair=0x7f2124000b90, addr=10.0.0.2, port=4420, with log timestamps advancing from 02:01:55.656 through 02:01:55.697 (elapsed time 00:33:32.049 through 00:33:32.054) ...]
00:33:32.054 [2024-05-15 02:01:55.697922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.054 [2024-05-15 02:01:55.698041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.054 [2024-05-15 02:01:55.698066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.054 qpair failed and we were unable to recover it.
00:33:32.055 [2024-05-15 02:01:55.698222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.698384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.698409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.055 qpair failed and we were unable to recover it. 00:33:32.055 [2024-05-15 02:01:55.698526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.698647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.698674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.055 qpair failed and we were unable to recover it. 00:33:32.055 [2024-05-15 02:01:55.698792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.698900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.698924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.055 qpair failed and we were unable to recover it. 00:33:32.055 [2024-05-15 02:01:55.699020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.699134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.699163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.055 qpair failed and we were unable to recover it. 00:33:32.055 [2024-05-15 02:01:55.699333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.699458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.699483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.055 qpair failed and we were unable to recover it. 00:33:32.055 [2024-05-15 02:01:55.699604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.699742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.699768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.055 qpair failed and we were unable to recover it. 00:33:32.055 [2024-05-15 02:01:55.699940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.700048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.700077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.055 qpair failed and we were unable to recover it. 
00:33:32.055 [2024-05-15 02:01:55.700214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.700387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.700412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.055 qpair failed and we were unable to recover it. 00:33:32.055 [2024-05-15 02:01:55.700508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.700612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.700637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.055 qpair failed and we were unable to recover it. 00:33:32.055 [2024-05-15 02:01:55.700729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.700844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.700870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.055 qpair failed and we were unable to recover it. 00:33:32.055 [2024-05-15 02:01:55.700988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.701085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.701111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.055 qpair failed and we were unable to recover it. 00:33:32.055 [2024-05-15 02:01:55.701253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.701342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.701370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.055 qpair failed and we were unable to recover it. 00:33:32.055 [2024-05-15 02:01:55.701499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.701655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.701683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.055 qpair failed and we were unable to recover it. 00:33:32.055 [2024-05-15 02:01:55.701843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.701970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.701998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.055 qpair failed and we were unable to recover it. 
00:33:32.055 [2024-05-15 02:01:55.702135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.702236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.702261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.055 qpair failed and we were unable to recover it. 00:33:32.055 [2024-05-15 02:01:55.702406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.702560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.702585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.055 qpair failed and we were unable to recover it. 00:33:32.055 [2024-05-15 02:01:55.702728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.702858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.702886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.055 qpair failed and we were unable to recover it. 00:33:32.055 [2024-05-15 02:01:55.703030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.703129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.703155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.055 qpair failed and we were unable to recover it. 00:33:32.055 [2024-05-15 02:01:55.703309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.703408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.703436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.055 qpair failed and we were unable to recover it. 00:33:32.055 [2024-05-15 02:01:55.703578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.703727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.703752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.055 qpair failed and we were unable to recover it. 00:33:32.055 [2024-05-15 02:01:55.703876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.703976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.704002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.055 qpair failed and we were unable to recover it. 
00:33:32.055 [2024-05-15 02:01:55.704105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.704241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.704269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.055 qpair failed and we were unable to recover it. 00:33:32.055 [2024-05-15 02:01:55.704405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.704567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.704595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.055 qpair failed and we were unable to recover it. 00:33:32.055 [2024-05-15 02:01:55.704735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.704833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.704859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.055 qpair failed and we were unable to recover it. 00:33:32.055 [2024-05-15 02:01:55.705022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.705155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.705183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.055 qpair failed and we were unable to recover it. 00:33:32.055 [2024-05-15 02:01:55.705331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.705454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.705479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.055 qpair failed and we were unable to recover it. 00:33:32.055 [2024-05-15 02:01:55.705609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.705705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.705730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.055 qpair failed and we were unable to recover it. 00:33:32.055 [2024-05-15 02:01:55.705825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.705968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.705993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.055 qpair failed and we were unable to recover it. 
00:33:32.055 [2024-05-15 02:01:55.706120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.055 [2024-05-15 02:01:55.706231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.706259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.056 qpair failed and we were unable to recover it. 00:33:32.056 [2024-05-15 02:01:55.706437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.706540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.706565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.056 qpair failed and we were unable to recover it. 00:33:32.056 [2024-05-15 02:01:55.706680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.706811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.706839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.056 qpair failed and we were unable to recover it. 00:33:32.056 [2024-05-15 02:01:55.706993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.707123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.707150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.056 qpair failed and we were unable to recover it. 00:33:32.056 [2024-05-15 02:01:55.707263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.707382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.707408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.056 qpair failed and we were unable to recover it. 00:33:32.056 [2024-05-15 02:01:55.707534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.707632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.707658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.056 qpair failed and we were unable to recover it. 00:33:32.056 [2024-05-15 02:01:55.707843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.707972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.707997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.056 qpair failed and we were unable to recover it. 
00:33:32.056 [2024-05-15 02:01:55.708123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.708274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.708304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.056 qpair failed and we were unable to recover it. 00:33:32.056 [2024-05-15 02:01:55.708419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.708545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.708569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.056 qpair failed and we were unable to recover it. 00:33:32.056 [2024-05-15 02:01:55.708711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.708826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.708854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.056 qpair failed and we were unable to recover it. 00:33:32.056 [2024-05-15 02:01:55.709021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.709133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.709158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.056 qpair failed and we were unable to recover it. 00:33:32.056 [2024-05-15 02:01:55.709263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.709385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.709410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.056 qpair failed and we were unable to recover it. 00:33:32.056 [2024-05-15 02:01:55.709509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.709657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.709685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.056 qpair failed and we were unable to recover it. 00:33:32.056 [2024-05-15 02:01:55.709805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.709908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.709933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.056 qpair failed and we were unable to recover it. 
00:33:32.056 [2024-05-15 02:01:55.710038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.710202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.710236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.056 qpair failed and we were unable to recover it. 00:33:32.056 [2024-05-15 02:01:55.710407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.710532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.710558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.056 qpair failed and we were unable to recover it. 00:33:32.056 [2024-05-15 02:01:55.710665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.710762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.710788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.056 qpair failed and we were unable to recover it. 00:33:32.056 [2024-05-15 02:01:55.710909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.711031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.711061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.056 qpair failed and we were unable to recover it. 00:33:32.056 [2024-05-15 02:01:55.711185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.711327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.711357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.056 qpair failed and we were unable to recover it. 00:33:32.056 [2024-05-15 02:01:55.711496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.711590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.711616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.056 qpair failed and we were unable to recover it. 00:33:32.056 [2024-05-15 02:01:55.711720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.711879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.711908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.056 qpair failed and we were unable to recover it. 
00:33:32.056 [2024-05-15 02:01:55.712043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.712177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.712206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.056 qpair failed and we were unable to recover it. 00:33:32.056 [2024-05-15 02:01:55.712335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.712459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.712485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.056 qpair failed and we were unable to recover it. 00:33:32.056 [2024-05-15 02:01:55.712607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.712740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.712769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.056 qpair failed and we were unable to recover it. 00:33:32.056 [2024-05-15 02:01:55.712877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.056 [2024-05-15 02:01:55.713008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.713037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.057 qpair failed and we were unable to recover it. 00:33:32.057 [2024-05-15 02:01:55.713184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.713287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.713314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.057 qpair failed and we were unable to recover it. 00:33:32.057 [2024-05-15 02:01:55.713429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.713558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.713587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.057 qpair failed and we were unable to recover it. 00:33:32.057 [2024-05-15 02:01:55.713720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.713883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.713916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.057 qpair failed and we were unable to recover it. 
00:33:32.057 [2024-05-15 02:01:55.714037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.714132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.714158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.057 qpair failed and we were unable to recover it. 00:33:32.057 [2024-05-15 02:01:55.714295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.714418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.714447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.057 qpair failed and we were unable to recover it. 00:33:32.057 [2024-05-15 02:01:55.714572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.714713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.714739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.057 qpair failed and we were unable to recover it. 00:33:32.057 [2024-05-15 02:01:55.714866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.714988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.715013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.057 qpair failed and we were unable to recover it. 00:33:32.057 [2024-05-15 02:01:55.715119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.715237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.715263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.057 qpair failed and we were unable to recover it. 00:33:32.057 [2024-05-15 02:01:55.715368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.715483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.715513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.057 qpair failed and we were unable to recover it. 00:33:32.057 [2024-05-15 02:01:55.715656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.715761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.715787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.057 qpair failed and we were unable to recover it. 
00:33:32.057 [2024-05-15 02:01:55.715878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.715976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.716002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.057 qpair failed and we were unable to recover it. 00:33:32.057 [2024-05-15 02:01:55.716104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.716190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.716220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.057 qpair failed and we were unable to recover it. 00:33:32.057 [2024-05-15 02:01:55.716345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.716471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.716501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.057 qpair failed and we were unable to recover it. 00:33:32.057 [2024-05-15 02:01:55.716617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.716744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.716774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.057 qpair failed and we were unable to recover it. 00:33:32.057 [2024-05-15 02:01:55.716885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.717024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.717053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.057 qpair failed and we were unable to recover it. 00:33:32.057 [2024-05-15 02:01:55.717170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.717266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.717292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.057 qpair failed and we were unable to recover it. 00:33:32.057 [2024-05-15 02:01:55.717389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.717503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.717529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.057 qpair failed and we were unable to recover it. 
00:33:32.057 [2024-05-15 02:01:55.717665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.717822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.717850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.057 qpair failed and we were unable to recover it. 00:33:32.057 [2024-05-15 02:01:55.717964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.718080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.718105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.057 qpair failed and we were unable to recover it. 00:33:32.057 [2024-05-15 02:01:55.718264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.718360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.718389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.057 qpair failed and we were unable to recover it. 00:33:32.057 [2024-05-15 02:01:55.718536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.718658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.718684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.057 qpair failed and we were unable to recover it. 00:33:32.057 [2024-05-15 02:01:55.718808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.718931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.718956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.057 qpair failed and we were unable to recover it. 00:33:32.057 [2024-05-15 02:01:55.719118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.719208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.719238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.057 qpair failed and we were unable to recover it. 00:33:32.057 [2024-05-15 02:01:55.719334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.719427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.719452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.057 qpair failed and we were unable to recover it. 
00:33:32.057 [2024-05-15 02:01:55.719548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.719673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.719699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.057 qpair failed and we were unable to recover it. 00:33:32.057 [2024-05-15 02:01:55.719847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.719989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.720016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.057 qpair failed and we were unable to recover it. 00:33:32.057 [2024-05-15 02:01:55.720115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.720259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.720287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.057 qpair failed and we were unable to recover it. 00:33:32.057 [2024-05-15 02:01:55.720427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.720534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.057 [2024-05-15 02:01:55.720560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.057 qpair failed and we were unable to recover it. 00:33:32.058 [2024-05-15 02:01:55.720682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.720830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.720856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.058 qpair failed and we were unable to recover it. 00:33:32.058 [2024-05-15 02:01:55.720955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.721096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.721124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.058 qpair failed and we were unable to recover it. 00:33:32.058 [2024-05-15 02:01:55.721239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.721363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.721389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.058 qpair failed and we were unable to recover it. 
00:33:32.058 [2024-05-15 02:01:55.721501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.721599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.721627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.058 qpair failed and we were unable to recover it. 00:33:32.058 [2024-05-15 02:01:55.721762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.721866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.721894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.058 qpair failed and we were unable to recover it. 00:33:32.058 [2024-05-15 02:01:55.722029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.722177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.722202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.058 qpair failed and we were unable to recover it. 00:33:32.058 [2024-05-15 02:01:55.722331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.722461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.722489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.058 qpair failed and we were unable to recover it. 00:33:32.058 [2024-05-15 02:01:55.722625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.722756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.722784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.058 qpair failed and we were unable to recover it. 00:33:32.058 [2024-05-15 02:01:55.722927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.723046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.723072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.058 qpair failed and we were unable to recover it. 00:33:32.058 [2024-05-15 02:01:55.723166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.723298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.723324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.058 qpair failed and we were unable to recover it. 
00:33:32.058 [2024-05-15 02:01:55.723475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.723608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.723635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.058 qpair failed and we were unable to recover it. 00:33:32.058 [2024-05-15 02:01:55.723760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.723883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.723908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.058 qpair failed and we were unable to recover it. 00:33:32.058 [2024-05-15 02:01:55.724029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.724134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.724175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.058 qpair failed and we were unable to recover it. 00:33:32.058 [2024-05-15 02:01:55.724308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.724431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.724457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.058 qpair failed and we were unable to recover it. 00:33:32.058 [2024-05-15 02:01:55.724552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.724671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.724696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.058 qpair failed and we were unable to recover it. 00:33:32.058 [2024-05-15 02:01:55.724819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.724951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.724978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.058 qpair failed and we were unable to recover it. 00:33:32.058 [2024-05-15 02:01:55.725112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.725255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.725283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.058 qpair failed and we were unable to recover it. 
00:33:32.058 [2024-05-15 02:01:55.725403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.725521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.725546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.058 qpair failed and we were unable to recover it. 00:33:32.058 [2024-05-15 02:01:55.725669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.725768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.725809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.058 qpair failed and we were unable to recover it. 00:33:32.058 [2024-05-15 02:01:55.725925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.726063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.726091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.058 qpair failed and we were unable to recover it. 00:33:32.058 [2024-05-15 02:01:55.726233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.726327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.726352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.058 qpair failed and we were unable to recover it. 00:33:32.058 [2024-05-15 02:01:55.726509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.726650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.726675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.058 qpair failed and we were unable to recover it. 00:33:32.058 [2024-05-15 02:01:55.726818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.726982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.727007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.058 qpair failed and we were unable to recover it. 00:33:32.058 [2024-05-15 02:01:55.727133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.727242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.727278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.058 qpair failed and we were unable to recover it. 
00:33:32.058 [2024-05-15 02:01:55.727392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.727507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.727534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.058 qpair failed and we were unable to recover it. 00:33:32.058 [2024-05-15 02:01:55.727650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.727762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.727789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.058 qpair failed and we were unable to recover it. 00:33:32.058 [2024-05-15 02:01:55.727922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.728055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.728080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.058 qpair failed and we were unable to recover it. 00:33:32.058 [2024-05-15 02:01:55.728202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.728348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.728376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.058 qpair failed and we were unable to recover it. 00:33:32.058 [2024-05-15 02:01:55.728532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.728668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.058 [2024-05-15 02:01:55.728695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.058 qpair failed and we were unable to recover it. 00:33:32.059 [2024-05-15 02:01:55.728828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.728975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.729001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.059 qpair failed and we were unable to recover it. 00:33:32.059 [2024-05-15 02:01:55.729151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.729267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.729293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.059 qpair failed and we were unable to recover it. 
00:33:32.059 [2024-05-15 02:01:55.729418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.729579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.729607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.059 qpair failed and we were unable to recover it. 00:33:32.059 [2024-05-15 02:01:55.729762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.729860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.729885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.059 qpair failed and we were unable to recover it. 00:33:32.059 [2024-05-15 02:01:55.729974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.730063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.730089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.059 qpair failed and we were unable to recover it. 00:33:32.059 [2024-05-15 02:01:55.730186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.730308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.730335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.059 qpair failed and we were unable to recover it. 00:33:32.059 [2024-05-15 02:01:55.730436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.730553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.730578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.059 qpair failed and we were unable to recover it. 00:33:32.059 [2024-05-15 02:01:55.730699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.730848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.730874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.059 qpair failed and we were unable to recover it. 00:33:32.059 [2024-05-15 02:01:55.730993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.731126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.731153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.059 qpair failed and we were unable to recover it. 
00:33:32.059 [2024-05-15 02:01:55.731311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.731412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.731437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.059 qpair failed and we were unable to recover it. 00:33:32.059 [2024-05-15 02:01:55.731548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.731683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.731711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.059 qpair failed and we were unable to recover it. 00:33:32.059 [2024-05-15 02:01:55.731822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.731932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.731960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.059 qpair failed and we were unable to recover it. 00:33:32.059 [2024-05-15 02:01:55.732094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.732186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.732212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.059 qpair failed and we were unable to recover it. 00:33:32.059 [2024-05-15 02:01:55.732336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.732468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.732497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.059 qpair failed and we were unable to recover it. 00:33:32.059 [2024-05-15 02:01:55.732630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.732757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.732782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.059 qpair failed and we were unable to recover it. 00:33:32.059 [2024-05-15 02:01:55.732885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.733007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.733032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.059 qpair failed and we were unable to recover it. 
00:33:32.059 [2024-05-15 02:01:55.733131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.733244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.733270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.059 qpair failed and we were unable to recover it. 00:33:32.059 [2024-05-15 02:01:55.733392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.733507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.733535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.059 qpair failed and we were unable to recover it. 00:33:32.059 [2024-05-15 02:01:55.733653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.733754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.733780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.059 qpair failed and we were unable to recover it. 00:33:32.059 [2024-05-15 02:01:55.733905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.733993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.734018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.059 qpair failed and we were unable to recover it. 00:33:32.059 [2024-05-15 02:01:55.734129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.734263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.734292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.059 qpair failed and we were unable to recover it. 00:33:32.059 [2024-05-15 02:01:55.734461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.734589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.734617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.059 qpair failed and we were unable to recover it. 00:33:32.059 [2024-05-15 02:01:55.734730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.734839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.734866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.059 qpair failed and we were unable to recover it. 
00:33:32.059 [2024-05-15 02:01:55.734960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.735102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.735127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.059 qpair failed and we were unable to recover it. 00:33:32.059 [2024-05-15 02:01:55.735233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.735332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.735358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.059 qpair failed and we were unable to recover it. 00:33:32.059 [2024-05-15 02:01:55.735456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.735602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.735630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.059 qpair failed and we were unable to recover it. 00:33:32.059 [2024-05-15 02:01:55.735741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.735856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.735882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.059 qpair failed and we were unable to recover it. 00:33:32.059 [2024-05-15 02:01:55.735978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.736100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.736127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.059 qpair failed and we were unable to recover it. 00:33:32.059 [2024-05-15 02:01:55.736255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.736383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.059 [2024-05-15 02:01:55.736412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.059 qpair failed and we were unable to recover it. 00:33:32.059 [2024-05-15 02:01:55.736542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.736639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.736668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.060 qpair failed and we were unable to recover it. 
00:33:32.060 [2024-05-15 02:01:55.736801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.736892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.736917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.060 qpair failed and we were unable to recover it. 00:33:32.060 [2024-05-15 02:01:55.737039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.737157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.737183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.060 qpair failed and we were unable to recover it. 00:33:32.060 [2024-05-15 02:01:55.737312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.737435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.737460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.060 qpair failed and we were unable to recover it. 00:33:32.060 [2024-05-15 02:01:55.737602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.737695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.737720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.060 qpair failed and we were unable to recover it. 00:33:32.060 [2024-05-15 02:01:55.737846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.737980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.738008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.060 qpair failed and we were unable to recover it. 00:33:32.060 [2024-05-15 02:01:55.738137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.738244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.738272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.060 qpair failed and we were unable to recover it. 00:33:32.060 [2024-05-15 02:01:55.738422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.738564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.738590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.060 qpair failed and we were unable to recover it. 
00:33:32.060 [2024-05-15 02:01:55.738776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.738869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.738895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.060 qpair failed and we were unable to recover it. 00:33:32.060 [2024-05-15 02:01:55.739013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.739108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.739134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.060 qpair failed and we were unable to recover it. 00:33:32.060 [2024-05-15 02:01:55.739243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.739335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.739362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.060 qpair failed and we were unable to recover it. 00:33:32.060 [2024-05-15 02:01:55.739493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.739588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.739630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.060 qpair failed and we were unable to recover it. 00:33:32.060 [2024-05-15 02:01:55.739735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.739907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.739933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.060 qpair failed and we were unable to recover it. 00:33:32.060 [2024-05-15 02:01:55.740057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.740159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.740184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.060 qpair failed and we were unable to recover it. 00:33:32.060 [2024-05-15 02:01:55.740283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.740410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.740435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.060 qpair failed and we were unable to recover it. 
00:33:32.060 [2024-05-15 02:01:55.740519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.740609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.740635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.060 qpair failed and we were unable to recover it. 00:33:32.060 [2024-05-15 02:01:55.740724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.740818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.740843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.060 qpair failed and we were unable to recover it. 00:33:32.060 [2024-05-15 02:01:55.740958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.741081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.741106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.060 qpair failed and we were unable to recover it. 00:33:32.060 [2024-05-15 02:01:55.741226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.741337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.741364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.060 qpair failed and we were unable to recover it. 00:33:32.060 [2024-05-15 02:01:55.741475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.741567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.741592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.060 qpair failed and we were unable to recover it. 00:33:32.060 [2024-05-15 02:01:55.741718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.741843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.741871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.060 qpair failed and we were unable to recover it. 00:33:32.060 [2024-05-15 02:01:55.741967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.742089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.742116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.060 qpair failed and we were unable to recover it. 
00:33:32.060 [2024-05-15 02:01:55.742254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.742345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.742371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.060 qpair failed and we were unable to recover it. 00:33:32.060 [2024-05-15 02:01:55.742492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.742661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.742686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.060 qpair failed and we were unable to recover it. 00:33:32.060 [2024-05-15 02:01:55.742780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.742889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.742916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.060 qpair failed and we were unable to recover it. 00:33:32.060 [2024-05-15 02:01:55.743030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.743130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.743155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.060 qpair failed and we were unable to recover it. 00:33:32.060 [2024-05-15 02:01:55.743279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.743402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.743430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.060 qpair failed and we were unable to recover it. 00:33:32.060 [2024-05-15 02:01:55.743540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.743651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.743679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.060 qpair failed and we were unable to recover it. 00:33:32.060 [2024-05-15 02:01:55.743833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.743922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.743947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.060 qpair failed and we were unable to recover it. 
00:33:32.060 [2024-05-15 02:01:55.744064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.060 [2024-05-15 02:01:55.744163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.744197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.061 qpair failed and we were unable to recover it. 00:33:32.061 [2024-05-15 02:01:55.744357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.744490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.744545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.061 qpair failed and we were unable to recover it. 00:33:32.061 [2024-05-15 02:01:55.744664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.744758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.744784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.061 qpair failed and we were unable to recover it. 00:33:32.061 [2024-05-15 02:01:55.744880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.744997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.745039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.061 qpair failed and we were unable to recover it. 00:33:32.061 [2024-05-15 02:01:55.745178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.745309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.745336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.061 qpair failed and we were unable to recover it. 00:33:32.061 [2024-05-15 02:01:55.745483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.745614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.745638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.061 qpair failed and we were unable to recover it. 00:33:32.061 [2024-05-15 02:01:55.745790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.745916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.745943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.061 qpair failed and we were unable to recover it. 
00:33:32.061 [2024-05-15 02:01:55.746051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.746200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.746236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.061 qpair failed and we were unable to recover it. 00:33:32.061 [2024-05-15 02:01:55.746337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.746433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.746459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.061 qpair failed and we were unable to recover it. 00:33:32.061 [2024-05-15 02:01:55.746599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.746711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.746740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.061 qpair failed and we were unable to recover it. 00:33:32.061 [2024-05-15 02:01:55.746881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.746985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.747015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.061 qpair failed and we were unable to recover it. 00:33:32.061 [2024-05-15 02:01:55.747144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.747269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.747296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.061 qpair failed and we were unable to recover it. 00:33:32.061 [2024-05-15 02:01:55.747471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.747573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.747598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.061 qpair failed and we were unable to recover it. 00:33:32.061 [2024-05-15 02:01:55.747714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.747830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.747856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.061 qpair failed and we were unable to recover it. 
00:33:32.061 [2024-05-15 02:01:55.747954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.748079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.748104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.061 qpair failed and we were unable to recover it. 00:33:32.061 [2024-05-15 02:01:55.748227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.748336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.748364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.061 qpair failed and we were unable to recover it. 00:33:32.061 [2024-05-15 02:01:55.748540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.748651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.748678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.061 qpair failed and we were unable to recover it. 00:33:32.061 [2024-05-15 02:01:55.748801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.748918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.748944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.061 qpair failed and we were unable to recover it. 00:33:32.061 [2024-05-15 02:01:55.749082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.749199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.749243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.061 qpair failed and we were unable to recover it. 00:33:32.061 [2024-05-15 02:01:55.749430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.749538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.749565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.061 qpair failed and we were unable to recover it. 00:33:32.061 [2024-05-15 02:01:55.749692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.749786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.749812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.061 qpair failed and we were unable to recover it. 
00:33:32.061 [2024-05-15 02:01:55.749944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.750100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.750128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.061 qpair failed and we were unable to recover it. 00:33:32.061 [2024-05-15 02:01:55.750245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.750356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.750385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.061 qpair failed and we were unable to recover it. 00:33:32.061 [2024-05-15 02:01:55.750505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.750650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.750676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.061 qpair failed and we were unable to recover it. 00:33:32.061 [2024-05-15 02:01:55.750829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.750933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.750960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.061 qpair failed and we were unable to recover it. 00:33:32.061 [2024-05-15 02:01:55.751094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.061 [2024-05-15 02:01:55.751250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.751279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.062 qpair failed and we were unable to recover it. 00:33:32.062 [2024-05-15 02:01:55.751398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.751520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.751544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.062 qpair failed and we were unable to recover it. 00:33:32.062 [2024-05-15 02:01:55.751663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.751766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.751793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.062 qpair failed and we were unable to recover it. 
00:33:32.062 [2024-05-15 02:01:55.751953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.752086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.752118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.062 qpair failed and we were unable to recover it. 00:33:32.062 [2024-05-15 02:01:55.752291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.752386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.752412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.062 qpair failed and we were unable to recover it. 00:33:32.062 [2024-05-15 02:01:55.752510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.752655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.752679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.062 qpair failed and we were unable to recover it. 00:33:32.062 [2024-05-15 02:01:55.752794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.752909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.752937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.062 qpair failed and we were unable to recover it. 00:33:32.062 [2024-05-15 02:01:55.753058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.753203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.753237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.062 qpair failed and we were unable to recover it. 00:33:32.062 [2024-05-15 02:01:55.753349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.753500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.753529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.062 qpair failed and we were unable to recover it. 00:33:32.062 [2024-05-15 02:01:55.753626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.753717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.753743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.062 qpair failed and we were unable to recover it. 
00:33:32.062 [2024-05-15 02:01:55.753836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.753930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.753956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.062 qpair failed and we were unable to recover it. 00:33:32.062 [2024-05-15 02:01:55.754100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.754242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.754269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.062 qpair failed and we were unable to recover it. 00:33:32.062 [2024-05-15 02:01:55.754416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.754556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.754585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.062 qpair failed and we were unable to recover it. 00:33:32.062 [2024-05-15 02:01:55.754701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.754823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.754852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.062 qpair failed and we were unable to recover it. 00:33:32.062 [2024-05-15 02:01:55.754991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.755096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.755137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.062 qpair failed and we were unable to recover it. 00:33:32.062 [2024-05-15 02:01:55.755265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.755358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.755383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.062 qpair failed and we were unable to recover it. 00:33:32.062 [2024-05-15 02:01:55.755506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.755626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.755653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.062 qpair failed and we were unable to recover it. 
00:33:32.062 [2024-05-15 02:01:55.755785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.755886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.755927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.062 qpair failed and we were unable to recover it. 00:33:32.062 [2024-05-15 02:01:55.756051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.756189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.756237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.062 qpair failed and we were unable to recover it. 00:33:32.062 [2024-05-15 02:01:55.756352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.756448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.756474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.062 qpair failed and we were unable to recover it. 00:33:32.062 [2024-05-15 02:01:55.756622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.756733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.756761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.062 qpair failed and we were unable to recover it. 00:33:32.062 [2024-05-15 02:01:55.756931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.757023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.757049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.062 qpair failed and we were unable to recover it. 00:33:32.062 [2024-05-15 02:01:55.757178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.757332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.757375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.062 qpair failed and we were unable to recover it. 00:33:32.062 [2024-05-15 02:01:55.757481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.757641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.757669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.062 qpair failed and we were unable to recover it. 
00:33:32.062 [2024-05-15 02:01:55.757769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.757867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.757892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.062 qpair failed and we were unable to recover it. 00:33:32.062 [2024-05-15 02:01:55.757991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.758085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.758109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.062 qpair failed and we were unable to recover it. 00:33:32.062 [2024-05-15 02:01:55.758239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.758352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.758380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.062 qpair failed and we were unable to recover it. 00:33:32.062 [2024-05-15 02:01:55.758524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.758643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.758668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.062 qpair failed and we were unable to recover it. 00:33:32.062 [2024-05-15 02:01:55.758769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.758920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.062 [2024-05-15 02:01:55.758945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.062 qpair failed and we were unable to recover it. 00:33:32.062 [2024-05-15 02:01:55.759042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.063 [2024-05-15 02:01:55.759131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.063 [2024-05-15 02:01:55.759156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.063 qpair failed and we were unable to recover it. 00:33:32.063 [2024-05-15 02:01:55.759286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.063 [2024-05-15 02:01:55.759413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.063 [2024-05-15 02:01:55.759455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.063 qpair failed and we were unable to recover it. 
00:33:32.063 [2024-05-15 02:01:55.759600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.063 [2024-05-15 02:01:55.759701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.063 [2024-05-15 02:01:55.759726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.063 qpair failed and we were unable to recover it. 00:33:32.063 [2024-05-15 02:01:55.759865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.063 [2024-05-15 02:01:55.759968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.063 [2024-05-15 02:01:55.759996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.063 qpair failed and we were unable to recover it. 00:33:32.063 [2024-05-15 02:01:55.760127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.063 [2024-05-15 02:01:55.760273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.063 [2024-05-15 02:01:55.760298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.063 qpair failed and we were unable to recover it. 00:33:32.063 [2024-05-15 02:01:55.760410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.063 [2024-05-15 02:01:55.760542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.063 [2024-05-15 02:01:55.760567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.063 qpair failed and we were unable to recover it. 00:33:32.063 [2024-05-15 02:01:55.760686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.063 [2024-05-15 02:01:55.760825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.063 [2024-05-15 02:01:55.760853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.063 qpair failed and we were unable to recover it. 00:33:32.063 [2024-05-15 02:01:55.760959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.063 [2024-05-15 02:01:55.761063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.063 [2024-05-15 02:01:55.761093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.063 qpair failed and we were unable to recover it. 00:33:32.063 [2024-05-15 02:01:55.761218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.063 [2024-05-15 02:01:55.761319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.063 [2024-05-15 02:01:55.761345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.063 qpair failed and we were unable to recover it. 
00:33:32.063 [2024-05-15 02:01:55.761449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.063 [2024-05-15 02:01:55.761569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.063 [2024-05-15 02:01:55.761597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.063 qpair failed and we were unable to recover it. 00:33:32.063 [2024-05-15 02:01:55.761701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.063 [2024-05-15 02:01:55.761805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.063 [2024-05-15 02:01:55.761846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.063 qpair failed and we were unable to recover it. 00:33:32.063 [2024-05-15 02:01:55.761969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.063 [2024-05-15 02:01:55.762091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.063 [2024-05-15 02:01:55.762116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.063 qpair failed and we were unable to recover it. 00:33:32.063 [2024-05-15 02:01:55.762241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.063 [2024-05-15 02:01:55.762345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.063 [2024-05-15 02:01:55.762373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.063 qpair failed and we were unable to recover it. 00:33:32.063 [2024-05-15 02:01:55.762503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.063 [2024-05-15 02:01:55.762611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.063 [2024-05-15 02:01:55.762638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.063 qpair failed and we were unable to recover it. 00:33:32.063 [2024-05-15 02:01:55.762777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.063 [2024-05-15 02:01:55.762876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.063 [2024-05-15 02:01:55.762903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.063 qpair failed and we were unable to recover it. 00:33:32.063 [2024-05-15 02:01:55.763050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.063 [2024-05-15 02:01:55.763191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.063 [2024-05-15 02:01:55.763225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.063 qpair failed and we were unable to recover it. 
00:33:32.063 [2024-05-15 02:01:55.763355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.063 [2024-05-15 02:01:55.763468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.063 [2024-05-15 02:01:55.763508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.063 qpair failed and we were unable to recover it.
00:33:32.063 [2024-05-15 02:01:55.763674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.063 [2024-05-15 02:01:55.763767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.063 [2024-05-15 02:01:55.763793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.063 qpair failed and we were unable to recover it.
00:33:32.063 [2024-05-15 02:01:55.763928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.063 [2024-05-15 02:01:55.764036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.063 [2024-05-15 02:01:55.764063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.063 qpair failed and we were unable to recover it.
00:33:32.063 [2024-05-15 02:01:55.764165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.063 [2024-05-15 02:01:55.764327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.063 [2024-05-15 02:01:55.764353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.063 qpair failed and we were unable to recover it.
00:33:32.063 [2024-05-15 02:01:55.764454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.063 [2024-05-15 02:01:55.764585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.063 [2024-05-15 02:01:55.764610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.063 qpair failed and we were unable to recover it.
00:33:32.063 [2024-05-15 02:01:55.764706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.063 [2024-05-15 02:01:55.764856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.063 [2024-05-15 02:01:55.764883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.063 qpair failed and we were unable to recover it.
00:33:32.063 [2024-05-15 02:01:55.765033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.063 [2024-05-15 02:01:55.765132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.063 [2024-05-15 02:01:55.765157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.063 qpair failed and we were unable to recover it.
00:33:32.063 [2024-05-15 02:01:55.765311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.063 [2024-05-15 02:01:55.765407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.063 [2024-05-15 02:01:55.765432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.063 qpair failed and we were unable to recover it.
00:33:32.063 [2024-05-15 02:01:55.765581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.063 [2024-05-15 02:01:55.765738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.063 [2024-05-15 02:01:55.765766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.063 qpair failed and we were unable to recover it.
00:33:32.063 [2024-05-15 02:01:55.765919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.063 [2024-05-15 02:01:55.766020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.063 [2024-05-15 02:01:55.766047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.063 qpair failed and we were unable to recover it.
00:33:32.063 [2024-05-15 02:01:55.766143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.063 [2024-05-15 02:01:55.766263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.063 [2024-05-15 02:01:55.766290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.063 qpair failed and we were unable to recover it.
00:33:32.063 [2024-05-15 02:01:55.766399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.063 [2024-05-15 02:01:55.766524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.063 [2024-05-15 02:01:55.766553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.063 qpair failed and we were unable to recover it.
00:33:32.063 [2024-05-15 02:01:55.766689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.063 [2024-05-15 02:01:55.766797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.063 [2024-05-15 02:01:55.766826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.063 qpair failed and we were unable to recover it.
00:33:32.063 [2024-05-15 02:01:55.766965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.063 [2024-05-15 02:01:55.767082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.767106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.064 qpair failed and we were unable to recover it.
00:33:32.064 [2024-05-15 02:01:55.767233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.767353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.767381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.064 qpair failed and we were unable to recover it.
00:33:32.064 [2024-05-15 02:01:55.767530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.767629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.767655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.064 qpair failed and we were unable to recover it.
00:33:32.064 [2024-05-15 02:01:55.767764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.767911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.767937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.064 qpair failed and we were unable to recover it.
00:33:32.064 [2024-05-15 02:01:55.768085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.768221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.768251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.064 qpair failed and we were unable to recover it.
00:33:32.064 [2024-05-15 02:01:55.768386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.768477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.768503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.064 qpair failed and we were unable to recover it.
00:33:32.064 [2024-05-15 02:01:55.768627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.768720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.768746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.064 qpair failed and we were unable to recover it.
00:33:32.064 [2024-05-15 02:01:55.768849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.768970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.768995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.064 qpair failed and we were unable to recover it.
00:33:32.064 [2024-05-15 02:01:55.769113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.769252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.769281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.064 qpair failed and we were unable to recover it.
00:33:32.064 [2024-05-15 02:01:55.769426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.769534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.769559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.064 qpair failed and we were unable to recover it.
00:33:32.064 [2024-05-15 02:01:55.769687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.769820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.769847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.064 qpair failed and we were unable to recover it.
00:33:32.064 [2024-05-15 02:01:55.769948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.770041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.770069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.064 qpair failed and we were unable to recover it.
00:33:32.064 [2024-05-15 02:01:55.770196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.770292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.770318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.064 qpair failed and we were unable to recover it.
00:33:32.064 [2024-05-15 02:01:55.770441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.770600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.770629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.064 qpair failed and we were unable to recover it.
00:33:32.064 [2024-05-15 02:01:55.770764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.770875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.770903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.064 qpair failed and we were unable to recover it.
00:33:32.064 [2024-05-15 02:01:55.771042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.771162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.771188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.064 qpair failed and we were unable to recover it.
00:33:32.064 [2024-05-15 02:01:55.771377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.771510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.771550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.064 qpair failed and we were unable to recover it.
00:33:32.064 [2024-05-15 02:01:55.771675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.771791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.771817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.064 qpair failed and we were unable to recover it.
00:33:32.064 [2024-05-15 02:01:55.771926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.772016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.772042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.064 qpair failed and we were unable to recover it.
00:33:32.064 [2024-05-15 02:01:55.772191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.772336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.772380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.064 qpair failed and we were unable to recover it.
00:33:32.064 [2024-05-15 02:01:55.772477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.772624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.772652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.064 qpair failed and we were unable to recover it.
00:33:32.064 [2024-05-15 02:01:55.772786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.772913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.772938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.064 qpair failed and we were unable to recover it.
00:33:32.064 [2024-05-15 02:01:55.773052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.773166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.773195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.064 qpair failed and we were unable to recover it.
00:33:32.064 [2024-05-15 02:01:55.773310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.773454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.773482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.064 qpair failed and we were unable to recover it.
00:33:32.064 [2024-05-15 02:01:55.773625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.773770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.773797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.064 qpair failed and we were unable to recover it.
00:33:32.064 [2024-05-15 02:01:55.773908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.774057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.774083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.064 qpair failed and we were unable to recover it.
00:33:32.064 [2024-05-15 02:01:55.774209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.774310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.774335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.064 qpair failed and we were unable to recover it.
00:33:32.064 [2024-05-15 02:01:55.774478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.774572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.774599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.064 qpair failed and we were unable to recover it.
00:33:32.064 [2024-05-15 02:01:55.774719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.774881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.064 [2024-05-15 02:01:55.774907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.064 qpair failed and we were unable to recover it.
00:33:32.064 [2024-05-15 02:01:55.775052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.775195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.775231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.065 qpair failed and we were unable to recover it.
00:33:32.065 [2024-05-15 02:01:55.775378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.775477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.775503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.065 qpair failed and we were unable to recover it.
00:33:32.065 [2024-05-15 02:01:55.775600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.775724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.775750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.065 qpair failed and we were unable to recover it.
00:33:32.065 [2024-05-15 02:01:55.775892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.776050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.776078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.065 qpair failed and we were unable to recover it.
00:33:32.065 [2024-05-15 02:01:55.776205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.776309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.776335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.065 qpair failed and we were unable to recover it.
00:33:32.065 [2024-05-15 02:01:55.776434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.776527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.776553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.065 qpair failed and we were unable to recover it.
00:33:32.065 [2024-05-15 02:01:55.776682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.776773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.776798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.065 qpair failed and we were unable to recover it.
00:33:32.065 [2024-05-15 02:01:55.776941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.777058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.777087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.065 qpair failed and we were unable to recover it.
00:33:32.065 [2024-05-15 02:01:55.777208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.777322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.777348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.065 qpair failed and we were unable to recover it.
00:33:32.065 [2024-05-15 02:01:55.777459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.777603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.777628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.065 qpair failed and we were unable to recover it.
00:33:32.065 [2024-05-15 02:01:55.777731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.777830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.777855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.065 qpair failed and we were unable to recover it.
00:33:32.065 [2024-05-15 02:01:55.777963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.778050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.778074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.065 qpair failed and we were unable to recover it.
00:33:32.065 [2024-05-15 02:01:55.778198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.778298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.778325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.065 qpair failed and we were unable to recover it.
00:33:32.065 [2024-05-15 02:01:55.778416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.778514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.778540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.065 qpair failed and we were unable to recover it.
00:33:32.065 [2024-05-15 02:01:55.778663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.778786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.778812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.065 qpair failed and we were unable to recover it.
00:33:32.065 [2024-05-15 02:01:55.778905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.778994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.779020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.065 qpair failed and we were unable to recover it.
00:33:32.065 [2024-05-15 02:01:55.779143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.779272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.779300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.065 qpair failed and we were unable to recover it.
00:33:32.065 [2024-05-15 02:01:55.779408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.779532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.779561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.065 qpair failed and we were unable to recover it.
00:33:32.065 [2024-05-15 02:01:55.779692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.779784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.779810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.065 qpair failed and we were unable to recover it.
00:33:32.065 [2024-05-15 02:01:55.779915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.780034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.780061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.065 qpair failed and we were unable to recover it.
00:33:32.065 [2024-05-15 02:01:55.780165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.780285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.780316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.065 qpair failed and we were unable to recover it.
00:33:32.065 [2024-05-15 02:01:55.780461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.780561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.780587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.065 qpair failed and we were unable to recover it.
00:33:32.065 [2024-05-15 02:01:55.780709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.780832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.780858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.065 qpair failed and we were unable to recover it.
00:33:32.065 [2024-05-15 02:01:55.780969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.781069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.781095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.065 qpair failed and we were unable to recover it.
00:33:32.065 [2024-05-15 02:01:55.781221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.781340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.065 [2024-05-15 02:01:55.781366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.065 qpair failed and we were unable to recover it.
00:33:32.065 [2024-05-15 02:01:55.781484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.781603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.781631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.066 qpair failed and we were unable to recover it.
00:33:32.066 [2024-05-15 02:01:55.781785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.781911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.781936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.066 qpair failed and we were unable to recover it.
00:33:32.066 [2024-05-15 02:01:55.782031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.782159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.782187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.066 qpair failed and we were unable to recover it.
00:33:32.066 [2024-05-15 02:01:55.782301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.782399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.782426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.066 qpair failed and we were unable to recover it.
00:33:32.066 [2024-05-15 02:01:55.782553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.782672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.782698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.066 qpair failed and we were unable to recover it.
00:33:32.066 [2024-05-15 02:01:55.782793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.782913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.782938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.066 qpair failed and we were unable to recover it.
00:33:32.066 [2024-05-15 02:01:55.783056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.783180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.783206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.066 qpair failed and we were unable to recover it.
00:33:32.066 [2024-05-15 02:01:55.783362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.783513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.783539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.066 qpair failed and we were unable to recover it.
00:33:32.066 [2024-05-15 02:01:55.783666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.783811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.783838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.066 qpair failed and we were unable to recover it.
00:33:32.066 [2024-05-15 02:01:55.783940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.784032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.784058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.066 qpair failed and we were unable to recover it.
00:33:32.066 [2024-05-15 02:01:55.784178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.784280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.784307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.066 qpair failed and we were unable to recover it.
00:33:32.066 [2024-05-15 02:01:55.784395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.784513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.784537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.066 qpair failed and we were unable to recover it.
00:33:32.066 [2024-05-15 02:01:55.784641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.784740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.784765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.066 qpair failed and we were unable to recover it.
00:33:32.066 [2024-05-15 02:01:55.784867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.784961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.784986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.066 qpair failed and we were unable to recover it.
00:33:32.066 [2024-05-15 02:01:55.785111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.785231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.785257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.066 qpair failed and we were unable to recover it.
00:33:32.066 [2024-05-15 02:01:55.785361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.785502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.785527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.066 qpair failed and we were unable to recover it.
00:33:32.066 [2024-05-15 02:01:55.785622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.785764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.785789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.066 qpair failed and we were unable to recover it.
00:33:32.066 [2024-05-15 02:01:55.785895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.786029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.786055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.066 qpair failed and we were unable to recover it.
00:33:32.066 [2024-05-15 02:01:55.786207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.786339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.786365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.066 qpair failed and we were unable to recover it.
00:33:32.066 [2024-05-15 02:01:55.786469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.786567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.786593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.066 qpair failed and we were unable to recover it.
00:33:32.066 [2024-05-15 02:01:55.786714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.786837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.786864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.066 qpair failed and we were unable to recover it.
00:33:32.066 [2024-05-15 02:01:55.787010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.787135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.787161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.066 qpair failed and we were unable to recover it.
00:33:32.066 [2024-05-15 02:01:55.787289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.787397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.787425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.066 qpair failed and we were unable to recover it.
00:33:32.066 [2024-05-15 02:01:55.787577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.787699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.787725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.066 qpair failed and we were unable to recover it.
00:33:32.066 [2024-05-15 02:01:55.787844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.787971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.787996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.066 qpair failed and we were unable to recover it.
00:33:32.066 [2024-05-15 02:01:55.788124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.788252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.788277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.066 qpair failed and we were unable to recover it.
00:33:32.066 [2024-05-15 02:01:55.788392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.788546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.788590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.066 qpair failed and we were unable to recover it.
00:33:32.066 [2024-05-15 02:01:55.788711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.788838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.788864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.066 qpair failed and we were unable to recover it.
00:33:32.066 [2024-05-15 02:01:55.788990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.789082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.789107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.066 qpair failed and we were unable to recover it.
00:33:32.066 [2024-05-15 02:01:55.789230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.789326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.066 [2024-05-15 02:01:55.789353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.066 qpair failed and we were unable to recover it.
00:33:32.067 [2024-05-15 02:01:55.789452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.789601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.789627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.067 qpair failed and we were unable to recover it.
00:33:32.067 [2024-05-15 02:01:55.789749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.789866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.789892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.067 qpair failed and we were unable to recover it.
00:33:32.067 [2024-05-15 02:01:55.790017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.790125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.790152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.067 qpair failed and we were unable to recover it.
00:33:32.067 [2024-05-15 02:01:55.790284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.790376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.790402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.067 qpair failed and we were unable to recover it.
00:33:32.067 [2024-05-15 02:01:55.790503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.790626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.790652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.067 qpair failed and we were unable to recover it.
00:33:32.067 [2024-05-15 02:01:55.790749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.790874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.790900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.067 qpair failed and we were unable to recover it.
00:33:32.067 [2024-05-15 02:01:55.791047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.791170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.791195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.067 qpair failed and we were unable to recover it.
00:33:32.067 [2024-05-15 02:01:55.791330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.791470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.791495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.067 qpair failed and we were unable to recover it.
00:33:32.067 [2024-05-15 02:01:55.791613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.791734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.791762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.067 qpair failed and we were unable to recover it.
00:33:32.067 [2024-05-15 02:01:55.791889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.791976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.792001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.067 qpair failed and we were unable to recover it.
00:33:32.067 [2024-05-15 02:01:55.792104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.792225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.792251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.067 qpair failed and we were unable to recover it.
00:33:32.067 [2024-05-15 02:01:55.792389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.792551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.792596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.067 qpair failed and we were unable to recover it.
00:33:32.067 [2024-05-15 02:01:55.792748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.792872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.792901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.067 qpair failed and we were unable to recover it.
00:33:32.067 [2024-05-15 02:01:55.793024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.793148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.793174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.067 qpair failed and we were unable to recover it.
00:33:32.067 [2024-05-15 02:01:55.793294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.793461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.793486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.067 qpair failed and we were unable to recover it.
00:33:32.067 [2024-05-15 02:01:55.793616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.793717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.793743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.067 qpair failed and we were unable to recover it.
00:33:32.067 [2024-05-15 02:01:55.793869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.793998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.794025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.067 qpair failed and we were unable to recover it.
00:33:32.067 [2024-05-15 02:01:55.794119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.794241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.794267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.067 qpair failed and we were unable to recover it.
00:33:32.067 [2024-05-15 02:01:55.794391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.794488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.794515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.067 qpair failed and we were unable to recover it.
00:33:32.067 [2024-05-15 02:01:55.794614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.794730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.794755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.067 qpair failed and we were unable to recover it.
00:33:32.067 [2024-05-15 02:01:55.794877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.795002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.795028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.067 qpair failed and we were unable to recover it.
00:33:32.067 [2024-05-15 02:01:55.795128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.795231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.795258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.067 qpair failed and we were unable to recover it.
00:33:32.067 [2024-05-15 02:01:55.795407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.795533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.795565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.067 qpair failed and we were unable to recover it.
00:33:32.067 [2024-05-15 02:01:55.795691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.795817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.795842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.067 qpair failed and we were unable to recover it.
00:33:32.067 [2024-05-15 02:01:55.795967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.796062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.796087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.067 qpair failed and we were unable to recover it.
00:33:32.067 [2024-05-15 02:01:55.796206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.796340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.796365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.067 qpair failed and we were unable to recover it.
00:33:32.067 [2024-05-15 02:01:55.796467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.796613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.796638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.067 qpair failed and we were unable to recover it.
00:33:32.067 [2024-05-15 02:01:55.796762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.796881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.796907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.067 qpair failed and we were unable to recover it.
00:33:32.067 [2024-05-15 02:01:55.797054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.797183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.067 [2024-05-15 02:01:55.797210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.067 qpair failed and we were unable to recover it.
00:33:32.067 [2024-05-15 02:01:55.797334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.067 [2024-05-15 02:01:55.797479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.797505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.068 qpair failed and we were unable to recover it. 00:33:32.068 [2024-05-15 02:01:55.797625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.797747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.797772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.068 qpair failed and we were unable to recover it. 00:33:32.068 [2024-05-15 02:01:55.797871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.797994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.798020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.068 qpair failed and we were unable to recover it. 00:33:32.068 [2024-05-15 02:01:55.798147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.798273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.798305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.068 qpair failed and we were unable to recover it. 00:33:32.068 [2024-05-15 02:01:55.798456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.798590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.798616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.068 qpair failed and we were unable to recover it. 00:33:32.068 [2024-05-15 02:01:55.798721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.798851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.798876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.068 qpair failed and we were unable to recover it. 00:33:32.068 [2024-05-15 02:01:55.798979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.799073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.799099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.068 qpair failed and we were unable to recover it. 
00:33:32.068 [2024-05-15 02:01:55.799237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.799385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.799428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.068 qpair failed and we were unable to recover it. 00:33:32.068 [2024-05-15 02:01:55.799572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.799715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.799741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.068 qpair failed and we were unable to recover it. 00:33:32.068 [2024-05-15 02:01:55.799897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.800028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.800055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.068 qpair failed and we were unable to recover it. 00:33:32.068 [2024-05-15 02:01:55.800174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.800355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.800402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.068 qpair failed and we were unable to recover it. 00:33:32.068 [2024-05-15 02:01:55.800521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.800667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.800692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.068 qpair failed and we were unable to recover it. 00:33:32.068 [2024-05-15 02:01:55.800794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.800915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.800942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.068 qpair failed and we were unable to recover it. 00:33:32.068 [2024-05-15 02:01:55.801042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.801141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.801171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.068 qpair failed and we were unable to recover it. 
00:33:32.068 [2024-05-15 02:01:55.801283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.801433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.801459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.068 qpair failed and we were unable to recover it. 00:33:32.068 [2024-05-15 02:01:55.801615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.801768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.801806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.068 qpair failed and we were unable to recover it. 00:33:32.068 [2024-05-15 02:01:55.801916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.802044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.802069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.068 qpair failed and we were unable to recover it. 00:33:32.068 [2024-05-15 02:01:55.802199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.802321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.802350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.068 qpair failed and we were unable to recover it. 00:33:32.068 [2024-05-15 02:01:55.802515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.802694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.802757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.068 qpair failed and we were unable to recover it. 00:33:32.068 [2024-05-15 02:01:55.802852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.803002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.803029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.068 qpair failed and we were unable to recover it. 00:33:32.068 [2024-05-15 02:01:55.803180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.803345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.803372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.068 qpair failed and we were unable to recover it. 
00:33:32.068 [2024-05-15 02:01:55.803492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.803610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.803637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.068 qpair failed and we were unable to recover it. 00:33:32.068 [2024-05-15 02:01:55.803772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.803896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.803922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.068 qpair failed and we were unable to recover it. 00:33:32.068 [2024-05-15 02:01:55.804044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.804161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.804187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.068 qpair failed and we were unable to recover it. 00:33:32.068 [2024-05-15 02:01:55.804351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.804451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.804478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.068 qpair failed and we were unable to recover it. 00:33:32.068 [2024-05-15 02:01:55.804630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.804768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.804810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.068 qpair failed and we were unable to recover it. 00:33:32.068 [2024-05-15 02:01:55.804932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.805046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.805072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.068 qpair failed and we were unable to recover it. 00:33:32.068 [2024-05-15 02:01:55.805196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.805322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.805352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.068 qpair failed and we were unable to recover it. 
00:33:32.068 [2024-05-15 02:01:55.805510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.805668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.805713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.068 qpair failed and we were unable to recover it. 00:33:32.068 [2024-05-15 02:01:55.805819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.805914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.068 [2024-05-15 02:01:55.805940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.068 qpair failed and we were unable to recover it. 00:33:32.069 [2024-05-15 02:01:55.806069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.806193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.806226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.069 qpair failed and we were unable to recover it. 00:33:32.069 [2024-05-15 02:01:55.806328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.806454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.806480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.069 qpair failed and we were unable to recover it. 00:33:32.069 [2024-05-15 02:01:55.806583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.806733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.806759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.069 qpair failed and we were unable to recover it. 00:33:32.069 [2024-05-15 02:01:55.806881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.807032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.807059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.069 qpair failed and we were unable to recover it. 00:33:32.069 [2024-05-15 02:01:55.807187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.807297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.807324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.069 qpair failed and we were unable to recover it. 
00:33:32.069 [2024-05-15 02:01:55.807474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.807576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.807602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.069 qpair failed and we were unable to recover it. 00:33:32.069 [2024-05-15 02:01:55.807721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.807871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.807898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.069 qpair failed and we were unable to recover it. 00:33:32.069 [2024-05-15 02:01:55.808014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.808141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.808166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.069 qpair failed and we were unable to recover it. 00:33:32.069 [2024-05-15 02:01:55.808263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.808389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.808433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.069 qpair failed and we were unable to recover it. 00:33:32.069 [2024-05-15 02:01:55.808583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.808801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.808860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.069 qpair failed and we were unable to recover it. 00:33:32.069 [2024-05-15 02:01:55.808996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.809146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.809172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.069 qpair failed and we were unable to recover it. 00:33:32.069 [2024-05-15 02:01:55.809305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.809465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.809494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.069 qpair failed and we were unable to recover it. 
00:33:32.069 [2024-05-15 02:01:55.809690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.809832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.809858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.069 qpair failed and we were unable to recover it. 00:33:32.069 [2024-05-15 02:01:55.809990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.810085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.810111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.069 qpair failed and we were unable to recover it. 00:33:32.069 [2024-05-15 02:01:55.810240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.810335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.810362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.069 qpair failed and we were unable to recover it. 00:33:32.069 [2024-05-15 02:01:55.810449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.810579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.810606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.069 qpair failed and we were unable to recover it. 00:33:32.069 [2024-05-15 02:01:55.810707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.810827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.810853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.069 qpair failed and we were unable to recover it. 00:33:32.069 [2024-05-15 02:01:55.810950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.811096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.811123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.069 qpair failed and we were unable to recover it. 00:33:32.069 [2024-05-15 02:01:55.811220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.811324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.811351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.069 qpair failed and we were unable to recover it. 
00:33:32.069 [2024-05-15 02:01:55.811441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.811567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.811593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.069 qpair failed and we were unable to recover it. 00:33:32.069 [2024-05-15 02:01:55.811718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.811802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.811828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.069 qpair failed and we were unable to recover it. 00:33:32.069 [2024-05-15 02:01:55.811951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.812081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.812109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.069 qpair failed and we were unable to recover it. 00:33:32.069 [2024-05-15 02:01:55.812254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.812348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.812374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.069 qpair failed and we were unable to recover it. 00:33:32.069 [2024-05-15 02:01:55.812470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.812574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.812600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.069 qpair failed and we were unable to recover it. 00:33:32.069 [2024-05-15 02:01:55.812700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.812851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.812878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.069 qpair failed and we were unable to recover it. 00:33:32.069 [2024-05-15 02:01:55.812979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.813136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.813162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.069 qpair failed and we were unable to recover it. 
00:33:32.069 [2024-05-15 02:01:55.813343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.813463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.813506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.069 qpair failed and we were unable to recover it. 00:33:32.069 [2024-05-15 02:01:55.813603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.813695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.813721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.069 qpair failed and we were unable to recover it. 00:33:32.069 [2024-05-15 02:01:55.813847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.813942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.813968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.069 qpair failed and we were unable to recover it. 00:33:32.069 [2024-05-15 02:01:55.814083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.069 [2024-05-15 02:01:55.814213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.814245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.070 qpair failed and we were unable to recover it. 00:33:32.070 [2024-05-15 02:01:55.814400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.814549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.814575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.070 qpair failed and we were unable to recover it. 00:33:32.070 [2024-05-15 02:01:55.814701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.814856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.814882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.070 qpair failed and we were unable to recover it. 00:33:32.070 [2024-05-15 02:01:55.815000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.815156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.815182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.070 qpair failed and we were unable to recover it. 
00:33:32.070 [2024-05-15 02:01:55.815305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.815407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.815433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.070 qpair failed and we were unable to recover it. 00:33:32.070 [2024-05-15 02:01:55.815560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.815699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.815743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.070 qpair failed and we were unable to recover it. 00:33:32.070 [2024-05-15 02:01:55.815837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.815960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.815986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.070 qpair failed and we were unable to recover it. 00:33:32.070 [2024-05-15 02:01:55.816082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.816183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.816226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.070 qpair failed and we were unable to recover it. 00:33:32.070 [2024-05-15 02:01:55.816346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.816522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.816550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.070 qpair failed and we were unable to recover it. 00:33:32.070 [2024-05-15 02:01:55.816710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.816861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.816897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.070 qpair failed and we were unable to recover it. 00:33:32.070 [2024-05-15 02:01:55.817013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.817107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.817133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.070 qpair failed and we were unable to recover it. 
00:33:32.070 [2024-05-15 02:01:55.817239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.817347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.817373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.070 qpair failed and we were unable to recover it. 00:33:32.070 [2024-05-15 02:01:55.817467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.817593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.817619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.070 qpair failed and we were unable to recover it. 00:33:32.070 [2024-05-15 02:01:55.817742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.817840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.817866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.070 qpair failed and we were unable to recover it. 00:33:32.070 [2024-05-15 02:01:55.817989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.818134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.818160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.070 qpair failed and we were unable to recover it. 00:33:32.070 [2024-05-15 02:01:55.818295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.818426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.818454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.070 qpair failed and we were unable to recover it. 00:33:32.070 [2024-05-15 02:01:55.818602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.818759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.818785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.070 qpair failed and we were unable to recover it. 00:33:32.070 [2024-05-15 02:01:55.818936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.819053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.819079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.070 qpair failed and we were unable to recover it. 
00:33:32.070 [2024-05-15 02:01:55.819198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.819300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.819326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.070 qpair failed and we were unable to recover it. 00:33:32.070 [2024-05-15 02:01:55.819474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.819607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.819644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.070 qpair failed and we were unable to recover it. 00:33:32.070 [2024-05-15 02:01:55.819792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.819934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.819964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.070 qpair failed and we were unable to recover it. 00:33:32.070 [2024-05-15 02:01:55.820092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.820222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.820249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.070 qpair failed and we were unable to recover it. 00:33:32.070 [2024-05-15 02:01:55.820403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.820495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.820529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.070 qpair failed and we were unable to recover it. 00:33:32.070 [2024-05-15 02:01:55.820662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.820782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.820809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.070 qpair failed and we were unable to recover it. 00:33:32.070 [2024-05-15 02:01:55.820905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.821053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.821079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.070 qpair failed and we were unable to recover it. 
00:33:32.070 [2024-05-15 02:01:55.821185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.821309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.070 [2024-05-15 02:01:55.821336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.071 qpair failed and we were unable to recover it. 00:33:32.071 [2024-05-15 02:01:55.821458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.071 [2024-05-15 02:01:55.821583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.071 [2024-05-15 02:01:55.821608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.071 qpair failed and we were unable to recover it. 00:33:32.071 [2024-05-15 02:01:55.821739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.071 [2024-05-15 02:01:55.821835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.071 [2024-05-15 02:01:55.821861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.071 qpair failed and we were unable to recover it. 00:33:32.071 [2024-05-15 02:01:55.822006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.071 [2024-05-15 02:01:55.822127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.071 [2024-05-15 02:01:55.822152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.071 qpair failed and we were unable to recover it. 00:33:32.071 [2024-05-15 02:01:55.822263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.071 [2024-05-15 02:01:55.822409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.071 [2024-05-15 02:01:55.822437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.071 qpair failed and we were unable to recover it. 00:33:32.071 [2024-05-15 02:01:55.822601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.071 [2024-05-15 02:01:55.822690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.071 [2024-05-15 02:01:55.822715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.071 qpair failed and we were unable to recover it. 00:33:32.071 [2024-05-15 02:01:55.822820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.071 [2024-05-15 02:01:55.822941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.071 [2024-05-15 02:01:55.822969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.071 qpair failed and we were unable to recover it. 
00:33:32.071 [2024-05-15 02:01:55.823067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.071 [2024-05-15 02:01:55.823192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.071 [2024-05-15 02:01:55.823230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.071 qpair failed and we were unable to recover it. 00:33:32.071 [2024-05-15 02:01:55.823360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.071 [2024-05-15 02:01:55.823483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.071 [2024-05-15 02:01:55.823509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.071 qpair failed and we were unable to recover it. 00:33:32.071 [2024-05-15 02:01:55.823600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.071 [2024-05-15 02:01:55.823694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.071 [2024-05-15 02:01:55.823719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.071 qpair failed and we were unable to recover it. 00:33:32.071 [2024-05-15 02:01:55.823857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.071 [2024-05-15 02:01:55.823981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.071 [2024-05-15 02:01:55.824006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.071 qpair failed and we were unable to recover it. 00:33:32.071 [2024-05-15 02:01:55.824100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.071 [2024-05-15 02:01:55.824246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.071 [2024-05-15 02:01:55.824272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.071 qpair failed and we were unable to recover it. 00:33:32.071 [2024-05-15 02:01:55.824397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.071 [2024-05-15 02:01:55.824544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.071 [2024-05-15 02:01:55.824570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.071 qpair failed and we were unable to recover it. 00:33:32.071 [2024-05-15 02:01:55.824689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.071 [2024-05-15 02:01:55.824812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.071 [2024-05-15 02:01:55.824839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.071 qpair failed and we were unable to recover it. 
00:33:32.071 [2024-05-15 02:01:55.825614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.071 [2024-05-15 02:01:55.825776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.071 [2024-05-15 02:01:55.825804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.071 qpair failed and we were unable to recover it. 00:33:32.071 [2024-05-15 02:01:55.825963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.071 [2024-05-15 02:01:55.826083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.071 [2024-05-15 02:01:55.826109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.071 qpair failed and we were unable to recover it. 00:33:32.071 [2024-05-15 02:01:55.826238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.071 [2024-05-15 02:01:55.826381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.071 [2024-05-15 02:01:55.826423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.071 qpair failed and we were unable to recover it. 00:33:32.071 [2024-05-15 02:01:55.826564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.071 [2024-05-15 02:01:55.826690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.071 [2024-05-15 02:01:55.826716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.071 qpair failed and we were unable to recover it. 00:33:32.071 [2024-05-15 02:01:55.826867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.071 [2024-05-15 02:01:55.826966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.071 [2024-05-15 02:01:55.826993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.071 qpair failed and we were unable to recover it. 00:33:32.071 [2024-05-15 02:01:55.827122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.071 [2024-05-15 02:01:55.827287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.071 [2024-05-15 02:01:55.827334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.071 qpair failed and we were unable to recover it. 00:33:32.071 [2024-05-15 02:01:55.827427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.071 [2024-05-15 02:01:55.827531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.071 [2024-05-15 02:01:55.827557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.071 qpair failed and we were unable to recover it. 
[... six further identical cycles against tqpair=0x7f211c000b90 follow between 02:01:55.827 and 02:01:55.829 ...]
00:33:32.071 [2024-05-15 02:01:55.829315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.071 [2024-05-15 02:01:55.829432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.071 [2024-05-15 02:01:55.829461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420
00:33:32.071 qpair failed and we were unable to recover it.
[... from here the failing qpair handle is tqpair=0x1d7c570 (same addr=10.0.0.2, port=4420); the identical failure cycle repeats unchanged through 02:01:55.837 ...]
00:33:32.072 [2024-05-15 02:01:55.837929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.072 [2024-05-15 02:01:55.838076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.072 [2024-05-15 02:01:55.838101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.072 qpair failed and we were unable to recover it. 00:33:32.072 [2024-05-15 02:01:55.838198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.072 [2024-05-15 02:01:55.838350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.072 [2024-05-15 02:01:55.838377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.072 qpair failed and we were unable to recover it. 00:33:32.072 [2024-05-15 02:01:55.838497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.072 [2024-05-15 02:01:55.838675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.072 [2024-05-15 02:01:55.838704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.072 qpair failed and we were unable to recover it. 00:33:32.072 [2024-05-15 02:01:55.838839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.072 [2024-05-15 02:01:55.838948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.072 [2024-05-15 02:01:55.838976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.072 qpair failed and we were unable to recover it. 00:33:32.072 [2024-05-15 02:01:55.839095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.839213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.839254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.073 qpair failed and we were unable to recover it. 00:33:32.073 [2024-05-15 02:01:55.839408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.839517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.839545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.073 qpair failed and we were unable to recover it. 00:33:32.073 [2024-05-15 02:01:55.839675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.839795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.839823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.073 qpair failed and we were unable to recover it. 
00:33:32.073 [2024-05-15 02:01:55.839953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.840058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.840087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.073 qpair failed and we were unable to recover it. 00:33:32.073 [2024-05-15 02:01:55.840239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.840359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.840385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.073 qpair failed and we were unable to recover it. 00:33:32.073 [2024-05-15 02:01:55.840524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.840631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.840671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.073 qpair failed and we were unable to recover it. 00:33:32.073 [2024-05-15 02:01:55.840801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.840933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.840973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.073 qpair failed and we were unable to recover it. 00:33:32.073 [2024-05-15 02:01:55.841090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.841171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.841197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.073 qpair failed and we were unable to recover it. 00:33:32.073 [2024-05-15 02:01:55.841335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.841462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.841503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.073 qpair failed and we were unable to recover it. 00:33:32.073 [2024-05-15 02:01:55.841651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.842482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.842540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.073 qpair failed and we were unable to recover it. 
00:33:32.073 [2024-05-15 02:01:55.842747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.842895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.842924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.073 qpair failed and we were unable to recover it. 00:33:32.073 [2024-05-15 02:01:55.843063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.843207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.843248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.073 qpair failed and we were unable to recover it. 00:33:32.073 [2024-05-15 02:01:55.843380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.843478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.843503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.073 qpair failed and we were unable to recover it. 00:33:32.073 [2024-05-15 02:01:55.843624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.843732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.843759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.073 qpair failed and we were unable to recover it. 00:33:32.073 [2024-05-15 02:01:55.843913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.844056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.844084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.073 qpair failed and we were unable to recover it. 00:33:32.073 [2024-05-15 02:01:55.844212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.844366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.844390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.073 qpair failed and we were unable to recover it. 00:33:32.073 [2024-05-15 02:01:55.844484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.844648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.844673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.073 qpair failed and we were unable to recover it. 
00:33:32.073 [2024-05-15 02:01:55.844841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.844967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.844992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.073 qpair failed and we were unable to recover it. 00:33:32.073 [2024-05-15 02:01:55.845090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.845183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.845229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.073 qpair failed and we were unable to recover it. 00:33:32.073 [2024-05-15 02:01:55.845339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.845491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.845525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.073 qpair failed and we were unable to recover it. 00:33:32.073 [2024-05-15 02:01:55.845655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.845806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.845833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.073 qpair failed and we were unable to recover it. 00:33:32.073 [2024-05-15 02:01:55.846015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.846194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.846248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.073 qpair failed and we were unable to recover it. 00:33:32.073 [2024-05-15 02:01:55.846725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.846918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.846953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.073 qpair failed and we were unable to recover it. 00:33:32.073 [2024-05-15 02:01:55.847081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.847232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.847260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.073 qpair failed and we were unable to recover it. 
00:33:32.073 [2024-05-15 02:01:55.847352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.847446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.847472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.073 qpair failed and we were unable to recover it. 00:33:32.073 [2024-05-15 02:01:55.847625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.847767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.847796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.073 qpair failed and we were unable to recover it. 00:33:32.073 [2024-05-15 02:01:55.847934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.848065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.848095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.073 qpair failed and we were unable to recover it. 00:33:32.073 [2024-05-15 02:01:55.848288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.848417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.848443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.073 qpair failed and we were unable to recover it. 00:33:32.073 [2024-05-15 02:01:55.848587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.848712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.848741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.073 qpair failed and we were unable to recover it. 00:33:32.073 [2024-05-15 02:01:55.848882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.848994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.073 [2024-05-15 02:01:55.849023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.074 qpair failed and we were unable to recover it. 00:33:32.074 [2024-05-15 02:01:55.849176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.849291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.849318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.074 qpair failed and we were unable to recover it. 
00:33:32.074 [2024-05-15 02:01:55.849418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.849551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.849578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.074 qpair failed and we were unable to recover it. 00:33:32.074 [2024-05-15 02:01:55.849731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.849896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.849925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.074 qpair failed and we were unable to recover it. 00:33:32.074 [2024-05-15 02:01:55.850044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.850193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.850241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.074 qpair failed and we were unable to recover it. 00:33:32.074 [2024-05-15 02:01:55.850353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.850482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.850508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.074 qpair failed and we were unable to recover it. 00:33:32.074 [2024-05-15 02:01:55.850623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.850763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.850792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.074 qpair failed and we were unable to recover it. 00:33:32.074 [2024-05-15 02:01:55.850949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.851083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.851112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.074 qpair failed and we were unable to recover it. 00:33:32.074 [2024-05-15 02:01:55.851266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.851388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.851415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.074 qpair failed and we were unable to recover it. 
00:33:32.074 [2024-05-15 02:01:55.851552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.851679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.851708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.074 qpair failed and we were unable to recover it. 00:33:32.074 [2024-05-15 02:01:55.851831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.851934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.851965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.074 qpair failed and we were unable to recover it. 00:33:32.074 [2024-05-15 02:01:55.852130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.852281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.852308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.074 qpair failed and we were unable to recover it. 00:33:32.074 [2024-05-15 02:01:55.852434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.852555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.852595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.074 qpair failed and we were unable to recover it. 00:33:32.074 [2024-05-15 02:01:55.852735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.852863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.852892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.074 qpair failed and we were unable to recover it. 00:33:32.074 [2024-05-15 02:01:55.853075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.853210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.853245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.074 qpair failed and we were unable to recover it. 00:33:32.074 [2024-05-15 02:01:55.853367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.853470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.853513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.074 qpair failed and we were unable to recover it. 
00:33:32.074 [2024-05-15 02:01:55.853658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.853798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.853827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.074 qpair failed and we were unable to recover it. 00:33:32.074 [2024-05-15 02:01:55.854025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.854160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.854190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.074 qpair failed and we were unable to recover it. 00:33:32.074 [2024-05-15 02:01:55.854335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.854464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.854491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.074 qpair failed and we were unable to recover it. 00:33:32.074 [2024-05-15 02:01:55.854637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.854728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.854754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.074 qpair failed and we were unable to recover it. 00:33:32.074 [2024-05-15 02:01:55.854922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.855058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.855086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.074 qpair failed and we were unable to recover it. 00:33:32.074 [2024-05-15 02:01:55.855184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.855350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.855377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.074 qpair failed and we were unable to recover it. 00:33:32.074 [2024-05-15 02:01:55.855497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.855658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.855688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.074 qpair failed and we were unable to recover it. 
00:33:32.074 [2024-05-15 02:01:55.855841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.856000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.856029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.074 qpair failed and we were unable to recover it. 00:33:32.074 [2024-05-15 02:01:55.856154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.856265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.856292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.074 qpair failed and we were unable to recover it. 00:33:32.074 [2024-05-15 02:01:55.856397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.856497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.074 [2024-05-15 02:01:55.856535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.074 qpair failed and we were unable to recover it. 00:33:32.074 [2024-05-15 02:01:55.856648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.856751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.856782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.075 qpair failed and we were unable to recover it. 00:33:32.075 [2024-05-15 02:01:55.856915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.857020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.857050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.075 qpair failed and we were unable to recover it. 00:33:32.075 [2024-05-15 02:01:55.857158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.857277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.857304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.075 qpair failed and we were unable to recover it. 00:33:32.075 [2024-05-15 02:01:55.857429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.857559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.857587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.075 qpair failed and we were unable to recover it. 
00:33:32.075 [2024-05-15 02:01:55.857753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.857904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.857936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.075 qpair failed and we were unable to recover it. 00:33:32.075 [2024-05-15 02:01:55.858079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.858182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.858229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.075 qpair failed and we were unable to recover it. 00:33:32.075 [2024-05-15 02:01:55.858354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.858471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.858497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.075 qpair failed and we were unable to recover it. 00:33:32.075 [2024-05-15 02:01:55.858622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.858730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.858758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.075 qpair failed and we were unable to recover it. 00:33:32.075 [2024-05-15 02:01:55.858894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.858988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.859017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.075 qpair failed and we were unable to recover it. 00:33:32.075 [2024-05-15 02:01:55.859155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.859266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.859293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.075 qpair failed and we were unable to recover it. 00:33:32.075 [2024-05-15 02:01:55.859398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.859522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.859548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.075 qpair failed and we were unable to recover it. 
00:33:32.075 [2024-05-15 02:01:55.859647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.859805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.859833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.075 qpair failed and we were unable to recover it. 00:33:32.075 [2024-05-15 02:01:55.859963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.860097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.860125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.075 qpair failed and we were unable to recover it. 00:33:32.075 [2024-05-15 02:01:55.860233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.860348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.860375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.075 qpair failed and we were unable to recover it. 00:33:32.075 [2024-05-15 02:01:55.860467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.860653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.860679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.075 qpair failed and we were unable to recover it. 00:33:32.075 [2024-05-15 02:01:55.860789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.860948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.860976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.075 qpair failed and we were unable to recover it. 00:33:32.075 [2024-05-15 02:01:55.861110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.861265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.861296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.075 qpair failed and we were unable to recover it. 00:33:32.075 [2024-05-15 02:01:55.861404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.861542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.861571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.075 qpair failed and we were unable to recover it. 
00:33:32.075 [2024-05-15 02:01:55.861740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.861828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.861853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.075 qpair failed and we were unable to recover it. 00:33:32.075 [2024-05-15 02:01:55.862010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.862112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.862140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.075 qpair failed and we were unable to recover it. 00:33:32.075 [2024-05-15 02:01:55.862269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.862366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.862392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.075 qpair failed and we were unable to recover it. 00:33:32.075 [2024-05-15 02:01:55.862491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.862643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.862669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.075 qpair failed and we were unable to recover it. 00:33:32.075 [2024-05-15 02:01:55.862821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.862989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.863015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.075 qpair failed and we were unable to recover it. 00:33:32.075 [2024-05-15 02:01:55.863173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.863319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.863346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.075 qpair failed and we were unable to recover it. 00:33:32.075 [2024-05-15 02:01:55.863446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.863559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.863592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.075 qpair failed and we were unable to recover it. 
00:33:32.075 [2024-05-15 02:01:55.863715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.863851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.863879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.075 qpair failed and we were unable to recover it. 00:33:32.075 [2024-05-15 02:01:55.863986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.864094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.864123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.075 qpair failed and we were unable to recover it. 00:33:32.075 [2024-05-15 02:01:55.864265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.864364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.864389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.075 qpair failed and we were unable to recover it. 00:33:32.075 [2024-05-15 02:01:55.864477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.864570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.864595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.075 qpair failed and we were unable to recover it. 00:33:32.075 [2024-05-15 02:01:55.864730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.864863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.075 [2024-05-15 02:01:55.864891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.076 qpair failed and we were unable to recover it. 00:33:32.076 [2024-05-15 02:01:55.865066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.076 [2024-05-15 02:01:55.865224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.076 [2024-05-15 02:01:55.865250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.076 qpair failed and we were unable to recover it. 00:33:32.076 [2024-05-15 02:01:55.865352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.076 [2024-05-15 02:01:55.865442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.076 [2024-05-15 02:01:55.865467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.076 qpair failed and we were unable to recover it. 
00:33:32.076 [2024-05-15 02:01:55.865562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.076 [2024-05-15 02:01:55.865703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.076 [2024-05-15 02:01:55.865731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.076 qpair failed and we were unable to recover it. 00:33:32.076 [2024-05-15 02:01:55.865874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.076 [2024-05-15 02:01:55.866030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.076 [2024-05-15 02:01:55.866057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.076 qpair failed and we were unable to recover it. 00:33:32.076 [2024-05-15 02:01:55.866175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.076 [2024-05-15 02:01:55.866320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.076 [2024-05-15 02:01:55.866347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.076 qpair failed and we were unable to recover it. 00:33:32.076 [2024-05-15 02:01:55.866441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.076 [2024-05-15 02:01:55.866584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.076 [2024-05-15 02:01:55.866612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.076 qpair failed and we were unable to recover it. 00:33:32.076 [2024-05-15 02:01:55.866772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.076 [2024-05-15 02:01:55.866907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.076 [2024-05-15 02:01:55.866935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.076 qpair failed and we were unable to recover it. 00:33:32.076 [2024-05-15 02:01:55.867074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.076 [2024-05-15 02:01:55.867179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.076 [2024-05-15 02:01:55.867226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.076 qpair failed and we were unable to recover it. 00:33:32.076 [2024-05-15 02:01:55.867351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.076 [2024-05-15 02:01:55.867469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.076 [2024-05-15 02:01:55.867495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.076 qpair failed and we were unable to recover it. 
00:33:32.076 [2024-05-15 02:01:55.867623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.076 [2024-05-15 02:01:55.867722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.076 [2024-05-15 02:01:55.867750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.076 qpair failed and we were unable to recover it. 00:33:32.076 [2024-05-15 02:01:55.867887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.076 [2024-05-15 02:01:55.867987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.076 [2024-05-15 02:01:55.868015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.076 qpair failed and we were unable to recover it. 00:33:32.076 [2024-05-15 02:01:55.868124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.076 [2024-05-15 02:01:55.868267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.076 [2024-05-15 02:01:55.868295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.076 qpair failed and we were unable to recover it. 00:33:32.076 [2024-05-15 02:01:55.868397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.076 [2024-05-15 02:01:55.868551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.076 [2024-05-15 02:01:55.868591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.076 qpair failed and we were unable to recover it. 00:33:32.076 [2024-05-15 02:01:55.868727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.076 [2024-05-15 02:01:55.868860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.076 [2024-05-15 02:01:55.868886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.076 qpair failed and we were unable to recover it. 00:33:32.076 [2024-05-15 02:01:55.869018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.076 [2024-05-15 02:01:55.869162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.076 [2024-05-15 02:01:55.869189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.076 qpair failed and we were unable to recover it. 00:33:32.076 [2024-05-15 02:01:55.869330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.076 [2024-05-15 02:01:55.869459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.076 [2024-05-15 02:01:55.869486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.076 qpair failed and we were unable to recover it. 
00:33:32.076 [2024-05-15 02:01:55.869636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.076 [2024-05-15 02:01:55.869754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.076 [2024-05-15 02:01:55.869787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.076 qpair failed and we were unable to recover it. 00:33:32.076 [2024-05-15 02:01:55.869912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.076 [2024-05-15 02:01:55.870063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.076 [2024-05-15 02:01:55.870092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.076 qpair failed and we were unable to recover it. 00:33:32.076 [2024-05-15 02:01:55.870214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.076 [2024-05-15 02:01:55.870327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.076 [2024-05-15 02:01:55.870353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.076 qpair failed and we were unable to recover it. 00:33:32.076 [2024-05-15 02:01:55.870469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.076 [2024-05-15 02:01:55.870636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.076 [2024-05-15 02:01:55.870665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.076 qpair failed and we were unable to recover it. 00:33:32.076 [2024-05-15 02:01:55.870772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.076 [2024-05-15 02:01:55.870890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.076 [2024-05-15 02:01:55.870920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.076 qpair failed and we were unable to recover it. 00:33:32.076 [2024-05-15 02:01:55.871037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.076 [2024-05-15 02:01:55.871162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.076 [2024-05-15 02:01:55.871189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.076 qpair failed and we were unable to recover it. 00:33:32.076 [2024-05-15 02:01:55.871371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.076 [2024-05-15 02:01:55.871476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.076 [2024-05-15 02:01:55.871501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.076 qpair failed and we were unable to recover it. 
00:33:32.076 [2024-05-15 02:01:55.871648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.076 [2024-05-15 02:01:55.871767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.076 [2024-05-15 02:01:55.871796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420
00:33:32.076 qpair failed and we were unable to recover it.
00:33:32.076 [2024-05-15 02:01:55.871994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.076 [2024-05-15 02:01:55.872138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.076 [2024-05-15 02:01:55.872167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.076 qpair failed and we were unable to recover it.
00:33:32.076 [2024-05-15 02:01:55.872314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.076 [2024-05-15 02:01:55.872457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.076 [2024-05-15 02:01:55.872486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.076 qpair failed and we were unable to recover it.
00:33:32.076 [2024-05-15 02:01:55.872661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.076 [2024-05-15 02:01:55.872872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.076 [2024-05-15 02:01:55.872920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.076 qpair failed and we were unable to recover it.
00:33:32.076 [2024-05-15 02:01:55.873041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.076 [2024-05-15 02:01:55.873168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.076 [2024-05-15 02:01:55.873195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.076 qpair failed and we were unable to recover it.
00:33:32.076 [2024-05-15 02:01:55.873321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.076 [2024-05-15 02:01:55.873469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.076 [2024-05-15 02:01:55.873513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.076 qpair failed and we were unable to recover it.
00:33:32.076 [2024-05-15 02:01:55.873682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.077 [2024-05-15 02:01:55.873889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.077 [2024-05-15 02:01:55.873933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.077 qpair failed and we were unable to recover it.
00:33:32.081 [2024-05-15 02:01:55.908453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.081 [2024-05-15 02:01:55.908577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.081 [2024-05-15 02:01:55.908604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.081 qpair failed and we were unable to recover it. 00:33:32.081 [2024-05-15 02:01:55.908729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.081 [2024-05-15 02:01:55.908858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.081 [2024-05-15 02:01:55.908884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.081 qpair failed and we were unable to recover it. 00:33:32.081 [2024-05-15 02:01:55.908987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.081 [2024-05-15 02:01:55.909119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.081 [2024-05-15 02:01:55.909146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.081 qpair failed and we were unable to recover it. 00:33:32.081 [2024-05-15 02:01:55.909303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.081 [2024-05-15 02:01:55.909429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.081 [2024-05-15 02:01:55.909458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.081 qpair failed and we were unable to recover it. 00:33:32.081 [2024-05-15 02:01:55.909603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.081 [2024-05-15 02:01:55.909754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.081 [2024-05-15 02:01:55.909781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.081 qpair failed and we were unable to recover it. 00:33:32.081 [2024-05-15 02:01:55.909882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.081 [2024-05-15 02:01:55.909976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.081 [2024-05-15 02:01:55.910003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.081 qpair failed and we were unable to recover it. 00:33:32.081 [2024-05-15 02:01:55.910124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.081 [2024-05-15 02:01:55.910222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.081 [2024-05-15 02:01:55.910249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.081 qpair failed and we were unable to recover it. 
00:33:32.081 [2024-05-15 02:01:55.910355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.081 [2024-05-15 02:01:55.910454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.081 [2024-05-15 02:01:55.910481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.081 qpair failed and we were unable to recover it. 00:33:32.081 [2024-05-15 02:01:55.910612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.081 [2024-05-15 02:01:55.910774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.081 [2024-05-15 02:01:55.910802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.081 qpair failed and we were unable to recover it. 00:33:32.081 [2024-05-15 02:01:55.910932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.081 [2024-05-15 02:01:55.911028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.081 [2024-05-15 02:01:55.911053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.081 qpair failed and we were unable to recover it. 00:33:32.081 [2024-05-15 02:01:55.911156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.081 [2024-05-15 02:01:55.911278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.081 [2024-05-15 02:01:55.911304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.081 qpair failed and we were unable to recover it. 00:33:32.081 [2024-05-15 02:01:55.911443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.081 [2024-05-15 02:01:55.911551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.081 [2024-05-15 02:01:55.911577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.081 qpair failed and we were unable to recover it. 00:33:32.081 [2024-05-15 02:01:55.911685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.081 [2024-05-15 02:01:55.911828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.081 [2024-05-15 02:01:55.911854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.081 qpair failed and we were unable to recover it. 00:33:32.081 [2024-05-15 02:01:55.911964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.081 [2024-05-15 02:01:55.912073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.081 [2024-05-15 02:01:55.912100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.081 qpair failed and we were unable to recover it. 
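The errno = 111 in the posix_sock_create messages above is Linux's ECONNREFUSED: each connect() to 10.0.0.2 port 4420 (the IANA-assigned NVMe/TCP port) is being actively refused because nothing is listening on the target side. A minimal standalone C sketch, separate from the test and purely illustrative, confirms the errno mapping:

    /* errno_decode.c - illustrative only; decodes the errno seen in the log. */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* On Linux, errno 111 is ECONNREFUSED: the peer answers the TCP SYN
         * with RST because nothing is bound to / listening on that port. */
        printf("errno 111 -> %s\n", strerror(111));
        printf("ECONNREFUSED == 111 -> %s\n", ECONNREFUSED == 111 ? "true" : "false");
        return 0;
    }

On a Linux/glibc system this prints "errno 111 -> Connection refused", which is the failure mode behind every posix_sock_create error above.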
00:33:32.085 [2024-05-15 02:01:55.943292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.085 [2024-05-15 02:01:55.943386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.085 [2024-05-15 02:01:55.943410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.085 qpair failed and we were unable to recover it. 00:33:32.085 [2024-05-15 02:01:55.943537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.085 [2024-05-15 02:01:55.943659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.085 [2024-05-15 02:01:55.943685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.085 qpair failed and we were unable to recover it. 00:33:32.085 [2024-05-15 02:01:55.943785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.085 [2024-05-15 02:01:55.943870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.085 [2024-05-15 02:01:55.943895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.085 qpair failed and we were unable to recover it. 00:33:32.085 [2024-05-15 02:01:55.944020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.085 [2024-05-15 02:01:55.944117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.085 [2024-05-15 02:01:55.944142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.085 qpair failed and we were unable to recover it. 00:33:32.085 [2024-05-15 02:01:55.944251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.085 [2024-05-15 02:01:55.944406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.085 [2024-05-15 02:01:55.944431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.085 qpair failed and we were unable to recover it. 00:33:32.085 [2024-05-15 02:01:55.944523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.085 [2024-05-15 02:01:55.944621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.085 [2024-05-15 02:01:55.944646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.085 qpair failed and we were unable to recover it. 00:33:32.085 [2024-05-15 02:01:55.944768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.085 [2024-05-15 02:01:55.944929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.085 [2024-05-15 02:01:55.944954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.085 qpair failed and we were unable to recover it. 
00:33:32.085 [2024-05-15 02:01:55.945076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.085 [2024-05-15 02:01:55.945171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.085 [2024-05-15 02:01:55.945197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.085 qpair failed and we were unable to recover it. 00:33:32.085 [2024-05-15 02:01:55.945309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.085 [2024-05-15 02:01:55.945422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.085 [2024-05-15 02:01:55.945447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.085 qpair failed and we were unable to recover it. 00:33:32.085 [2024-05-15 02:01:55.945601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.085 [2024-05-15 02:01:55.945723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.085 [2024-05-15 02:01:55.945748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.085 qpair failed and we were unable to recover it. 00:33:32.085 [2024-05-15 02:01:55.945859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.085 [2024-05-15 02:01:55.946002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.085 [2024-05-15 02:01:55.946027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.085 qpair failed and we were unable to recover it. 00:33:32.085 [2024-05-15 02:01:55.946123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.085 [2024-05-15 02:01:55.946231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.085 [2024-05-15 02:01:55.946255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.085 qpair failed and we were unable to recover it. 00:33:32.085 [2024-05-15 02:01:55.946348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.085 [2024-05-15 02:01:55.946442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.085 [2024-05-15 02:01:55.946465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.085 qpair failed and we were unable to recover it. 00:33:32.085 [2024-05-15 02:01:55.946571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.085 [2024-05-15 02:01:55.946671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.085 [2024-05-15 02:01:55.946696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.086 qpair failed and we were unable to recover it. 
00:33:32.086 [2024-05-15 02:01:55.946781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.946900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.946924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.086 qpair failed and we were unable to recover it. 00:33:32.086 [2024-05-15 02:01:55.947011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.947103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.947127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.086 qpair failed and we were unable to recover it. 00:33:32.086 [2024-05-15 02:01:55.947240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.947333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.947357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.086 qpair failed and we were unable to recover it. 00:33:32.086 [2024-05-15 02:01:55.947455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.947594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.947618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.086 qpair failed and we were unable to recover it. 00:33:32.086 [2024-05-15 02:01:55.947738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.947839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.947863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.086 qpair failed and we were unable to recover it. 00:33:32.086 [2024-05-15 02:01:55.947957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.948079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.948104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.086 qpair failed and we were unable to recover it. 00:33:32.086 [2024-05-15 02:01:55.948197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.948313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.948339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.086 qpair failed and we were unable to recover it. 
00:33:32.086 [2024-05-15 02:01:55.948436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.948568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.948592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.086 qpair failed and we were unable to recover it. 00:33:32.086 [2024-05-15 02:01:55.948717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.948808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.948833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.086 qpair failed and we were unable to recover it. 00:33:32.086 [2024-05-15 02:01:55.948932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.949027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.949052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.086 qpair failed and we were unable to recover it. 00:33:32.086 [2024-05-15 02:01:55.949169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.949269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.949295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.086 qpair failed and we were unable to recover it. 00:33:32.086 [2024-05-15 02:01:55.949423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.949519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.949544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.086 qpair failed and we were unable to recover it. 00:33:32.086 [2024-05-15 02:01:55.949670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.949759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.949784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.086 qpair failed and we were unable to recover it. 00:33:32.086 [2024-05-15 02:01:55.949876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.949969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.949994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.086 qpair failed and we were unable to recover it. 
00:33:32.086 [2024-05-15 02:01:55.950122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.950253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.950279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.086 qpair failed and we were unable to recover it. 00:33:32.086 [2024-05-15 02:01:55.950373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.950458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.950484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.086 qpair failed and we were unable to recover it. 00:33:32.086 [2024-05-15 02:01:55.950593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.950715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.950740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.086 qpair failed and we were unable to recover it. 00:33:32.086 [2024-05-15 02:01:55.950878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.950977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.951004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.086 qpair failed and we were unable to recover it. 00:33:32.086 [2024-05-15 02:01:55.951130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.951260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.951286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.086 qpair failed and we were unable to recover it. 00:33:32.086 [2024-05-15 02:01:55.951375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.951463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.951489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.086 qpair failed and we were unable to recover it. 00:33:32.086 [2024-05-15 02:01:55.951618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.951716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.951742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.086 qpair failed and we were unable to recover it. 
00:33:32.086 [2024-05-15 02:01:55.951860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.951983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.952008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.086 qpair failed and we were unable to recover it. 00:33:32.086 [2024-05-15 02:01:55.952106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.952239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.952266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.086 qpair failed and we were unable to recover it. 00:33:32.086 [2024-05-15 02:01:55.952390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.952490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.952516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.086 qpair failed and we were unable to recover it. 00:33:32.086 [2024-05-15 02:01:55.952617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.952738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.952763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.086 qpair failed and we were unable to recover it. 00:33:32.086 [2024-05-15 02:01:55.952862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.953007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.953032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.086 qpair failed and we were unable to recover it. 00:33:32.086 [2024-05-15 02:01:55.953147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.953260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.953286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.086 qpair failed and we were unable to recover it. 00:33:32.086 [2024-05-15 02:01:55.953387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.953476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.953501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.086 qpair failed and we were unable to recover it. 
00:33:32.086 [2024-05-15 02:01:55.953611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.953735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.953761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.086 qpair failed and we were unable to recover it. 00:33:32.086 [2024-05-15 02:01:55.953883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.953985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.954010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.086 qpair failed and we were unable to recover it. 00:33:32.086 [2024-05-15 02:01:55.954136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.954244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.954272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.086 qpair failed and we were unable to recover it. 00:33:32.086 [2024-05-15 02:01:55.954396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.954489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.954514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.086 qpair failed and we were unable to recover it. 00:33:32.086 [2024-05-15 02:01:55.954656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.086 [2024-05-15 02:01:55.954800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.087 [2024-05-15 02:01:55.954826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.087 qpair failed and we were unable to recover it. 00:33:32.087 [2024-05-15 02:01:55.954923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.087 [2024-05-15 02:01:55.955073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.087 [2024-05-15 02:01:55.955098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.087 qpair failed and we were unable to recover it. 00:33:32.087 [2024-05-15 02:01:55.955196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.087 [2024-05-15 02:01:55.955329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.087 [2024-05-15 02:01:55.955357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.087 qpair failed and we were unable to recover it. 
00:33:32.087 [2024-05-15 02:01:55.955541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.087 [2024-05-15 02:01:55.955648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.087 [2024-05-15 02:01:55.955689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.087 qpair failed and we were unable to recover it. 00:33:32.370 [2024-05-15 02:01:55.955826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.370 [2024-05-15 02:01:55.955938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.370 [2024-05-15 02:01:55.955964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.370 qpair failed and we were unable to recover it. 00:33:32.370 [2024-05-15 02:01:55.956055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.370 [2024-05-15 02:01:55.956162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.370 [2024-05-15 02:01:55.956189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.370 qpair failed and we were unable to recover it. 00:33:32.370 [2024-05-15 02:01:55.956336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.370 [2024-05-15 02:01:55.956468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.370 [2024-05-15 02:01:55.956498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.370 qpair failed and we were unable to recover it. 00:33:32.370 [2024-05-15 02:01:55.956631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.370 [2024-05-15 02:01:55.956726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.370 [2024-05-15 02:01:55.956753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.370 qpair failed and we were unable to recover it. 00:33:32.370 [2024-05-15 02:01:55.956852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.370 [2024-05-15 02:01:55.956957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.370 [2024-05-15 02:01:55.956983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.370 qpair failed and we were unable to recover it. 00:33:32.370 [2024-05-15 02:01:55.957087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.370 [2024-05-15 02:01:55.957209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.370 [2024-05-15 02:01:55.957256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.370 qpair failed and we were unable to recover it. 
00:33:32.370 [2024-05-15 02:01:55.957405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.370 [2024-05-15 02:01:55.957563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.370 [2024-05-15 02:01:55.957591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.370 qpair failed and we were unable to recover it. 00:33:32.370 [2024-05-15 02:01:55.957708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.370 [2024-05-15 02:01:55.957811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.370 [2024-05-15 02:01:55.957836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.370 qpair failed and we were unable to recover it. 00:33:32.370 [2024-05-15 02:01:55.957968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.370 [2024-05-15 02:01:55.958067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.370 [2024-05-15 02:01:55.958092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.370 qpair failed and we were unable to recover it. 00:33:32.370 [2024-05-15 02:01:55.958180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.370 [2024-05-15 02:01:55.958312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.370 [2024-05-15 02:01:55.958338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.370 qpair failed and we were unable to recover it. 00:33:32.370 [2024-05-15 02:01:55.958429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.370 [2024-05-15 02:01:55.958544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.370 [2024-05-15 02:01:55.958570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.370 qpair failed and we were unable to recover it. 00:33:32.370 [2024-05-15 02:01:55.958713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.370 [2024-05-15 02:01:55.958837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.370 [2024-05-15 02:01:55.958862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.370 qpair failed and we were unable to recover it. 00:33:32.370 [2024-05-15 02:01:55.958959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.370 [2024-05-15 02:01:55.959052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.370 [2024-05-15 02:01:55.959078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.370 qpair failed and we were unable to recover it. 
00:33:32.370 [2024-05-15 02:01:55.959199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.370 [2024-05-15 02:01:55.959350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.370 [2024-05-15 02:01:55.959376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.370 qpair failed and we were unable to recover it. 00:33:32.370 [2024-05-15 02:01:55.959499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.370 [2024-05-15 02:01:55.959601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.370 [2024-05-15 02:01:55.959627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.370 qpair failed and we were unable to recover it. 00:33:32.370 [2024-05-15 02:01:55.959730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.370 [2024-05-15 02:01:55.959826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.370 [2024-05-15 02:01:55.959851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.370 qpair failed and we were unable to recover it. 00:33:32.370 [2024-05-15 02:01:55.960013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.370 [2024-05-15 02:01:55.960111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.370 [2024-05-15 02:01:55.960136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.370 qpair failed and we were unable to recover it. 00:33:32.370 [2024-05-15 02:01:55.960279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.370 [2024-05-15 02:01:55.960395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.370 [2024-05-15 02:01:55.960439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.370 qpair failed and we were unable to recover it. 00:33:32.370 [2024-05-15 02:01:55.960583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.370 [2024-05-15 02:01:55.960681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.370 [2024-05-15 02:01:55.960711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.370 qpair failed and we were unable to recover it. 00:33:32.370 [2024-05-15 02:01:55.960839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.370 [2024-05-15 02:01:55.960933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.960959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.371 qpair failed and we were unable to recover it. 
00:33:32.371 [2024-05-15 02:01:55.961058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.961157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.961184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.371 qpair failed and we were unable to recover it. 00:33:32.371 [2024-05-15 02:01:55.961286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.961403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.961429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.371 qpair failed and we were unable to recover it. 00:33:32.371 [2024-05-15 02:01:55.961523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.961631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.961657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.371 qpair failed and we were unable to recover it. 00:33:32.371 [2024-05-15 02:01:55.961786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.961884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.961910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.371 qpair failed and we were unable to recover it. 00:33:32.371 [2024-05-15 02:01:55.962002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.962129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.962154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.371 qpair failed and we were unable to recover it. 00:33:32.371 [2024-05-15 02:01:55.962254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.962350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.962376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.371 qpair failed and we were unable to recover it. 00:33:32.371 [2024-05-15 02:01:55.962477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.962596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.962622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.371 qpair failed and we were unable to recover it. 
00:33:32.371 [2024-05-15 02:01:55.962746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.962873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.962900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.371 qpair failed and we were unable to recover it. 00:33:32.371 [2024-05-15 02:01:55.962997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.963116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.963142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.371 qpair failed and we were unable to recover it. 00:33:32.371 [2024-05-15 02:01:55.963257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.963349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.963376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.371 qpair failed and we were unable to recover it. 00:33:32.371 [2024-05-15 02:01:55.963503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.963622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.963650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.371 qpair failed and we were unable to recover it. 00:33:32.371 [2024-05-15 02:01:55.963773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.963872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.963897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.371 qpair failed and we were unable to recover it. 00:33:32.371 [2024-05-15 02:01:55.964000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.964097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.964122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.371 qpair failed and we were unable to recover it. 00:33:32.371 [2024-05-15 02:01:55.964246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.964338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.964364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.371 qpair failed and we were unable to recover it. 
00:33:32.371 [2024-05-15 02:01:55.964456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.964557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.964583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.371 qpair failed and we were unable to recover it. 00:33:32.371 [2024-05-15 02:01:55.964736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.964886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.964911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.371 qpair failed and we were unable to recover it. 00:33:32.371 [2024-05-15 02:01:55.965010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.965131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.965156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.371 qpair failed and we were unable to recover it. 00:33:32.371 [2024-05-15 02:01:55.965266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.965364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.965391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.371 qpair failed and we were unable to recover it. 00:33:32.371 [2024-05-15 02:01:55.965490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.965576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.965602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.371 qpair failed and we were unable to recover it. 00:33:32.371 [2024-05-15 02:01:55.965719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.965809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.965834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.371 qpair failed and we were unable to recover it. 00:33:32.371 [2024-05-15 02:01:55.965928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.966052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.966081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.371 qpair failed and we were unable to recover it. 
00:33:32.371 [2024-05-15 02:01:55.966180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.966279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.966304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.371 qpair failed and we were unable to recover it. 00:33:32.371 [2024-05-15 02:01:55.966396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.966508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.966533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.371 qpair failed and we were unable to recover it. 00:33:32.371 [2024-05-15 02:01:55.966637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.966758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.966782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.371 qpair failed and we were unable to recover it. 00:33:32.371 [2024-05-15 02:01:55.966878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.966985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.967010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.371 qpair failed and we were unable to recover it. 00:33:32.371 [2024-05-15 02:01:55.967104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.967198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.967255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.371 qpair failed and we were unable to recover it. 00:33:32.371 [2024-05-15 02:01:55.967383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.967477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.967501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.371 qpair failed and we were unable to recover it. 00:33:32.371 [2024-05-15 02:01:55.967617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.967734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.371 [2024-05-15 02:01:55.967758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.372 qpair failed and we were unable to recover it. 
00:33:32.372 [2024-05-15 02:01:55.967850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.967971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.967995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.372 qpair failed and we were unable to recover it. 00:33:32.372 [2024-05-15 02:01:55.968086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.968178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.968203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.372 qpair failed and we were unable to recover it. 00:33:32.372 [2024-05-15 02:01:55.968306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.968394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.968423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.372 qpair failed and we were unable to recover it. 00:33:32.372 [2024-05-15 02:01:55.968521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.968648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.968672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.372 qpair failed and we were unable to recover it. 00:33:32.372 [2024-05-15 02:01:55.968768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.968887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.968913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.372 qpair failed and we were unable to recover it. 00:33:32.372 [2024-05-15 02:01:55.969012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.969099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.969124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.372 qpair failed and we were unable to recover it. 00:33:32.372 [2024-05-15 02:01:55.969224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.969318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.969344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.372 qpair failed and we were unable to recover it. 
00:33:32.372 [2024-05-15 02:01:55.969465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.969568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.969593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.372 qpair failed and we were unable to recover it. 00:33:32.372 [2024-05-15 02:01:55.969712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.969805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.969831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.372 qpair failed and we were unable to recover it. 00:33:32.372 [2024-05-15 02:01:55.969936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.970047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.970072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.372 qpair failed and we were unable to recover it. 00:33:32.372 [2024-05-15 02:01:55.970228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.970327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.970352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.372 qpair failed and we were unable to recover it. 00:33:32.372 [2024-05-15 02:01:55.970451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.970541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.970567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.372 qpair failed and we were unable to recover it. 00:33:32.372 [2024-05-15 02:01:55.970664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.970762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.970791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.372 qpair failed and we were unable to recover it. 00:33:32.372 [2024-05-15 02:01:55.970916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.971041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.971067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.372 qpair failed and we were unable to recover it. 
00:33:32.372 [2024-05-15 02:01:55.971167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.971276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.971302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.372 qpair failed and we were unable to recover it. 00:33:32.372 [2024-05-15 02:01:55.971414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.971518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.971543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.372 qpair failed and we were unable to recover it. 00:33:32.372 [2024-05-15 02:01:55.971638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.971722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.971748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.372 qpair failed and we were unable to recover it. 00:33:32.372 [2024-05-15 02:01:55.971841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.971959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.971985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.372 qpair failed and we were unable to recover it. 00:33:32.372 [2024-05-15 02:01:55.972088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.972208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.972239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.372 qpair failed and we were unable to recover it. 00:33:32.372 [2024-05-15 02:01:55.972359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.972454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.972480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.372 qpair failed and we were unable to recover it. 00:33:32.372 [2024-05-15 02:01:55.972580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.972680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.972705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.372 qpair failed and we were unable to recover it. 
00:33:32.372 [2024-05-15 02:01:55.972801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.972920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.972946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.372 qpair failed and we were unable to recover it. 00:33:32.372 [2024-05-15 02:01:55.973044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.973138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.973163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.372 qpair failed and we were unable to recover it. 00:33:32.372 [2024-05-15 02:01:55.973266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.973358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.973383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.372 qpair failed and we were unable to recover it. 00:33:32.372 [2024-05-15 02:01:55.973480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.973578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.973605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.372 qpair failed and we were unable to recover it. 00:33:32.372 [2024-05-15 02:01:55.973707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.973831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.973856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.372 qpair failed and we were unable to recover it. 00:33:32.372 [2024-05-15 02:01:55.973943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.974069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.372 [2024-05-15 02:01:55.974094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.372 qpair failed and we were unable to recover it. 00:33:32.372 [2024-05-15 02:01:55.974193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.974296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.974322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.373 qpair failed and we were unable to recover it. 
00:33:32.373 [2024-05-15 02:01:55.974443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.974559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.974584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.373 qpair failed and we were unable to recover it. 00:33:32.373 [2024-05-15 02:01:55.974677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.974776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.974801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.373 qpair failed and we were unable to recover it. 00:33:32.373 [2024-05-15 02:01:55.974884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.974980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.975005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.373 qpair failed and we were unable to recover it. 00:33:32.373 [2024-05-15 02:01:55.975104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.975187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.975212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.373 qpair failed and we were unable to recover it. 00:33:32.373 [2024-05-15 02:01:55.975320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.975413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.975438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.373 qpair failed and we were unable to recover it. 00:33:32.373 [2024-05-15 02:01:55.975596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.975698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.975723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.373 qpair failed and we were unable to recover it. 00:33:32.373 [2024-05-15 02:01:55.975814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.975937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.975962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.373 qpair failed and we were unable to recover it. 
00:33:32.373 [2024-05-15 02:01:55.976065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.976188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.976213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.373 qpair failed and we were unable to recover it. 00:33:32.373 [2024-05-15 02:01:55.976317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.976410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.976436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.373 qpair failed and we were unable to recover it. 00:33:32.373 [2024-05-15 02:01:55.976575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.976693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.976717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.373 qpair failed and we were unable to recover it. 00:33:32.373 [2024-05-15 02:01:55.976813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.976918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.976943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.373 qpair failed and we were unable to recover it. 00:33:32.373 [2024-05-15 02:01:55.977038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.977162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.977187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.373 qpair failed and we were unable to recover it. 00:33:32.373 [2024-05-15 02:01:55.977321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.977416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.977441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.373 qpair failed and we were unable to recover it. 00:33:32.373 [2024-05-15 02:01:55.977574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.977663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.977688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.373 qpair failed and we were unable to recover it. 
00:33:32.373 [2024-05-15 02:01:55.977789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.977910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.977935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.373 qpair failed and we were unable to recover it. 00:33:32.373 [2024-05-15 02:01:55.978043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.978159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.978184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.373 qpair failed and we were unable to recover it. 00:33:32.373 [2024-05-15 02:01:55.978312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.978410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.978437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.373 qpair failed and we were unable to recover it. 00:33:32.373 [2024-05-15 02:01:55.978563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.978681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.978706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.373 qpair failed and we were unable to recover it. 00:33:32.373 [2024-05-15 02:01:55.978833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.978932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.978957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.373 qpair failed and we were unable to recover it. 00:33:32.373 [2024-05-15 02:01:55.979078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.979170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.979195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.373 qpair failed and we were unable to recover it. 00:33:32.373 [2024-05-15 02:01:55.979324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.979447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.979472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.373 qpair failed and we were unable to recover it. 
00:33:32.373 [2024-05-15 02:01:55.979573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.979673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.979698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.373 qpair failed and we were unable to recover it. 00:33:32.373 [2024-05-15 02:01:55.979848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.979953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.979977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.373 qpair failed and we were unable to recover it. 00:33:32.373 [2024-05-15 02:01:55.980100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.980194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.980226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.373 qpair failed and we were unable to recover it. 00:33:32.373 [2024-05-15 02:01:55.980350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.980446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.980472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.373 qpair failed and we were unable to recover it. 00:33:32.373 [2024-05-15 02:01:55.980579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.980725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.980750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.373 qpair failed and we were unable to recover it. 00:33:32.373 [2024-05-15 02:01:55.980849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.980969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.373 [2024-05-15 02:01:55.980994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.373 qpair failed and we were unable to recover it. 00:33:32.374 [2024-05-15 02:01:55.981085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.981212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.981266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.374 qpair failed and we were unable to recover it. 
00:33:32.374 [2024-05-15 02:01:55.981357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.981442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.981467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.374 qpair failed and we were unable to recover it. 00:33:32.374 [2024-05-15 02:01:55.981571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.981722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.981746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.374 qpair failed and we were unable to recover it. 00:33:32.374 [2024-05-15 02:01:55.981876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.981976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.982001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.374 qpair failed and we were unable to recover it. 00:33:32.374 [2024-05-15 02:01:55.982123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.982229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.982255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.374 qpair failed and we were unable to recover it. 00:33:32.374 [2024-05-15 02:01:55.982373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.982471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.982496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.374 qpair failed and we were unable to recover it. 00:33:32.374 [2024-05-15 02:01:55.982621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.982767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.982792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.374 qpair failed and we were unable to recover it. 00:33:32.374 [2024-05-15 02:01:55.982915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.983009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.983033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.374 qpair failed and we were unable to recover it. 
00:33:32.374 [2024-05-15 02:01:55.983132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.983277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.983303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.374 qpair failed and we were unable to recover it. 00:33:32.374 [2024-05-15 02:01:55.983399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.983500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.983525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.374 qpair failed and we were unable to recover it. 00:33:32.374 [2024-05-15 02:01:55.983640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.983730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.983755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.374 qpair failed and we were unable to recover it. 00:33:32.374 [2024-05-15 02:01:55.983878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.984002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.984027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.374 qpair failed and we were unable to recover it. 00:33:32.374 [2024-05-15 02:01:55.984125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.984225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.984250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.374 qpair failed and we were unable to recover it. 00:33:32.374 [2024-05-15 02:01:55.984347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.984442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.984467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.374 qpair failed and we were unable to recover it. 00:33:32.374 [2024-05-15 02:01:55.984562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.984685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.984709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.374 qpair failed and we were unable to recover it. 
00:33:32.374 [2024-05-15 02:01:55.984807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.984926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.984951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.374 qpair failed and we were unable to recover it. 00:33:32.374 [2024-05-15 02:01:55.985044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.985141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.985168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.374 qpair failed and we were unable to recover it. 00:33:32.374 [2024-05-15 02:01:55.985303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.985424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.985451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.374 qpair failed and we were unable to recover it. 00:33:32.374 [2024-05-15 02:01:55.985564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.985683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.985709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.374 qpair failed and we were unable to recover it. 00:33:32.374 [2024-05-15 02:01:55.985823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.985922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.985947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.374 qpair failed and we were unable to recover it. 00:33:32.374 [2024-05-15 02:01:55.986071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.986196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.986248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.374 qpair failed and we were unable to recover it. 00:33:32.374 [2024-05-15 02:01:55.986380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.986469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.986495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.374 qpair failed and we were unable to recover it. 
00:33:32.374 [2024-05-15 02:01:55.986584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.986703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.986729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.374 qpair failed and we were unable to recover it. 00:33:32.374 [2024-05-15 02:01:55.986821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.986921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.986947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.374 qpair failed and we were unable to recover it. 00:33:32.374 [2024-05-15 02:01:55.987073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.374 [2024-05-15 02:01:55.987190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.987221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.375 qpair failed and we were unable to recover it. 00:33:32.375 [2024-05-15 02:01:55.987314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.987434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.987459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.375 qpair failed and we were unable to recover it. 00:33:32.375 [2024-05-15 02:01:55.987551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.987672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.987698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.375 qpair failed and we were unable to recover it. 00:33:32.375 [2024-05-15 02:01:55.987797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.987894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.987919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.375 qpair failed and we were unable to recover it. 00:33:32.375 [2024-05-15 02:01:55.988028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.988145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.988175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.375 qpair failed and we were unable to recover it. 
00:33:32.375 [2024-05-15 02:01:55.988295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.988415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.988441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.375 qpair failed and we were unable to recover it. 00:33:32.375 [2024-05-15 02:01:55.988550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.988647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.988672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.375 qpair failed and we were unable to recover it. 00:33:32.375 [2024-05-15 02:01:55.988765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.988885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.988910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.375 qpair failed and we were unable to recover it. 00:33:32.375 [2024-05-15 02:01:55.989009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.989106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.989133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.375 qpair failed and we were unable to recover it. 00:33:32.375 [2024-05-15 02:01:55.989258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.989360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.989385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.375 qpair failed and we were unable to recover it. 00:33:32.375 [2024-05-15 02:01:55.989485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.989576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.989601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.375 qpair failed and we were unable to recover it. 00:33:32.375 [2024-05-15 02:01:55.989699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.989827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.989852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.375 qpair failed and we were unable to recover it. 
00:33:32.375 [2024-05-15 02:01:55.989980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.990075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.990101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.375 qpair failed and we were unable to recover it. 00:33:32.375 [2024-05-15 02:01:55.990224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.990316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.990342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.375 qpair failed and we were unable to recover it. 00:33:32.375 [2024-05-15 02:01:55.990472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.990571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.990596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.375 qpair failed and we were unable to recover it. 00:33:32.375 [2024-05-15 02:01:55.990726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.990843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.990869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.375 qpair failed and we were unable to recover it. 00:33:32.375 [2024-05-15 02:01:55.990969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.991057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.991083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.375 qpair failed and we were unable to recover it. 00:33:32.375 [2024-05-15 02:01:55.991177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.991301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.991327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.375 qpair failed and we were unable to recover it. 00:33:32.375 [2024-05-15 02:01:55.991451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.991591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.991619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.375 qpair failed and we were unable to recover it. 
00:33:32.375 [2024-05-15 02:01:55.991715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.991868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.991896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.375 qpair failed and we were unable to recover it. 00:33:32.375 [2024-05-15 02:01:55.992053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.992199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.992230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.375 qpair failed and we were unable to recover it. 00:33:32.375 [2024-05-15 02:01:55.992356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.992450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.992475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.375 qpair failed and we were unable to recover it. 00:33:32.375 [2024-05-15 02:01:55.992617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.992736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.992776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.375 qpair failed and we were unable to recover it. 00:33:32.375 [2024-05-15 02:01:55.992934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.993068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.993096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.375 qpair failed and we were unable to recover it. 00:33:32.375 [2024-05-15 02:01:55.993278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.993381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.375 [2024-05-15 02:01:55.993406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.375 qpair failed and we were unable to recover it. 00:33:32.375 [2024-05-15 02:01:55.993502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.376 [2024-05-15 02:01:55.993647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.376 [2024-05-15 02:01:55.993676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.376 qpair failed and we were unable to recover it. 
00:33:32.376 [2024-05-15 02:01:55.993776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.376 [2024-05-15 02:01:55.993884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.376 [2024-05-15 02:01:55.993912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.376 qpair failed and we were unable to recover it. 00:33:32.376 [2024-05-15 02:01:55.994042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.376 [2024-05-15 02:01:55.994145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.376 [2024-05-15 02:01:55.994173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.376 qpair failed and we were unable to recover it. 00:33:32.376 [2024-05-15 02:01:55.994349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.376 [2024-05-15 02:01:55.994443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.376 [2024-05-15 02:01:55.994468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.376 qpair failed and we were unable to recover it. 00:33:32.376 [2024-05-15 02:01:55.994575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.376 [2024-05-15 02:01:55.994717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.376 [2024-05-15 02:01:55.994745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.376 qpair failed and we were unable to recover it. 00:33:32.376 [2024-05-15 02:01:55.994907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.376 [2024-05-15 02:01:55.995015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.376 [2024-05-15 02:01:55.995042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.376 qpair failed and we were unable to recover it. 00:33:32.376 [2024-05-15 02:01:55.995165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.376 [2024-05-15 02:01:55.995292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.376 [2024-05-15 02:01:55.995318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.376 qpair failed and we were unable to recover it. 00:33:32.376 [2024-05-15 02:01:55.995437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.376 [2024-05-15 02:01:55.995559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.376 [2024-05-15 02:01:55.995584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.376 qpair failed and we were unable to recover it. 
00:33:32.376 [2024-05-15 02:01:55.995721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.376 [2024-05-15 02:01:55.995883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.376 [2024-05-15 02:01:55.995911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.376 qpair failed and we were unable to recover it. 00:33:32.376 [2024-05-15 02:01:55.996037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.376 [2024-05-15 02:01:55.996248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.376 [2024-05-15 02:01:55.996274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.376 qpair failed and we were unable to recover it. 00:33:32.376 [2024-05-15 02:01:55.996369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.376 [2024-05-15 02:01:55.996470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.376 [2024-05-15 02:01:55.996495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.376 qpair failed and we were unable to recover it. 00:33:32.376 [2024-05-15 02:01:55.996635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.376 [2024-05-15 02:01:55.996822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.376 [2024-05-15 02:01:55.996851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.376 qpair failed and we were unable to recover it. 00:33:32.376 [2024-05-15 02:01:55.996983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.376 [2024-05-15 02:01:55.997093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.376 [2024-05-15 02:01:55.997121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.376 qpair failed and we were unable to recover it. 00:33:32.376 [2024-05-15 02:01:55.997300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.376 [2024-05-15 02:01:55.997390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.376 [2024-05-15 02:01:55.997415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.376 qpair failed and we were unable to recover it. 00:33:32.376 [2024-05-15 02:01:55.997575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.376 [2024-05-15 02:01:55.997717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.376 [2024-05-15 02:01:55.997745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.376 qpair failed and we were unable to recover it. 
00:33:32.376 [2024-05-15 02:01:55.997877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.376 [2024-05-15 02:01:55.997968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.376 [2024-05-15 02:01:55.997995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.376 qpair failed and we were unable to recover it. 00:33:32.376 [2024-05-15 02:01:55.998119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.376 [2024-05-15 02:01:55.998210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.376 [2024-05-15 02:01:55.998253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.376 qpair failed and we were unable to recover it. 00:33:32.376 [2024-05-15 02:01:55.998355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.376 [2024-05-15 02:01:55.998445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.376 [2024-05-15 02:01:55.998470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.376 qpair failed and we were unable to recover it. 00:33:32.376 [2024-05-15 02:01:55.998603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.376 [2024-05-15 02:01:55.998732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.376 [2024-05-15 02:01:55.998757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.376 qpair failed and we were unable to recover it. 00:33:32.376 [2024-05-15 02:01:55.998871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.376 [2024-05-15 02:01:55.999015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.376 [2024-05-15 02:01:55.999056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.376 qpair failed and we were unable to recover it. 00:33:32.376 [2024-05-15 02:01:55.999167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.376 [2024-05-15 02:01:55.999295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.376 [2024-05-15 02:01:55.999321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.376 qpair failed and we were unable to recover it. 00:33:32.376 [2024-05-15 02:01:55.999442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.376 [2024-05-15 02:01:55.999562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.376 [2024-05-15 02:01:55.999600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.376 qpair failed and we were unable to recover it. 
00:33:32.376 [2024-05-15 02:01:55.999761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.376 [2024-05-15 02:01:55.999888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.376 [2024-05-15 02:01:55.999916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.376 qpair failed and we were unable to recover it.
00:33:32.376 [...] (the same three-message sequence — two posix.c:1037:posix_sock_create "connect() failed, errno = 111" errors, one nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420", followed by "qpair failed and we were unable to recover it." — repeats continuously from 02:01:55.999761 through 02:01:56.043014, log prefixes 00:33:32.376-00:33:32.382)
00:33:32.382 [2024-05-15 02:01:56.043140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.382 [2024-05-15 02:01:56.043271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.043301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.383 qpair failed and we were unable to recover it. 00:33:32.383 [2024-05-15 02:01:56.043471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.043568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.043594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.383 qpair failed and we were unable to recover it. 00:33:32.383 [2024-05-15 02:01:56.043703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.043836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.043864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.383 qpair failed and we were unable to recover it. 00:33:32.383 [2024-05-15 02:01:56.044044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.044136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.044161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.383 qpair failed and we were unable to recover it. 00:33:32.383 [2024-05-15 02:01:56.044307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.044426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.044451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.383 qpair failed and we were unable to recover it. 00:33:32.383 [2024-05-15 02:01:56.044578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.044722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.044747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.383 qpair failed and we were unable to recover it. 00:33:32.383 [2024-05-15 02:01:56.044866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.044972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.044997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.383 qpair failed and we were unable to recover it. 
00:33:32.383 [2024-05-15 02:01:56.045167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.045321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.045347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.383 qpair failed and we were unable to recover it. 00:33:32.383 [2024-05-15 02:01:56.045468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.045559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.045601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.383 qpair failed and we were unable to recover it. 00:33:32.383 [2024-05-15 02:01:56.045778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.045922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.045947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.383 qpair failed and we were unable to recover it. 00:33:32.383 [2024-05-15 02:01:56.046069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.046213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.046243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.383 qpair failed and we were unable to recover it. 00:33:32.383 [2024-05-15 02:01:56.046364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.046452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.046477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.383 qpair failed and we were unable to recover it. 00:33:32.383 [2024-05-15 02:01:56.046638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.046768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.046793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.383 qpair failed and we were unable to recover it. 00:33:32.383 [2024-05-15 02:01:56.046924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.047011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.047036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.383 qpair failed and we were unable to recover it. 
00:33:32.383 [2024-05-15 02:01:56.047123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.047296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.047322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.383 qpair failed and we were unable to recover it. 00:33:32.383 [2024-05-15 02:01:56.047441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.047559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.047584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.383 qpair failed and we were unable to recover it. 00:33:32.383 [2024-05-15 02:01:56.047696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.047788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.047813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.383 qpair failed and we were unable to recover it. 00:33:32.383 [2024-05-15 02:01:56.047933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.048061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.048089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.383 qpair failed and we were unable to recover it. 00:33:32.383 [2024-05-15 02:01:56.048240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.048359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.048385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.383 qpair failed and we were unable to recover it. 00:33:32.383 [2024-05-15 02:01:56.048537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.048664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.048689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.383 qpair failed and we were unable to recover it. 00:33:32.383 [2024-05-15 02:01:56.048840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.048994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.049019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.383 qpair failed and we were unable to recover it. 
00:33:32.383 [2024-05-15 02:01:56.049145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.049297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.049326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.383 qpair failed and we were unable to recover it. 00:33:32.383 [2024-05-15 02:01:56.049496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.049622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.049647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.383 qpair failed and we were unable to recover it. 00:33:32.383 [2024-05-15 02:01:56.049739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.049825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.049849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.383 qpair failed and we were unable to recover it. 00:33:32.383 [2024-05-15 02:01:56.049938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.050049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.050078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.383 qpair failed and we were unable to recover it. 00:33:32.383 [2024-05-15 02:01:56.050194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.050365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.383 [2024-05-15 02:01:56.050391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.383 qpair failed and we were unable to recover it. 00:33:32.384 [2024-05-15 02:01:56.050485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.050606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.050632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.384 qpair failed and we were unable to recover it. 00:33:32.384 [2024-05-15 02:01:56.050731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.050902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.050930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.384 qpair failed and we were unable to recover it. 
00:33:32.384 [2024-05-15 02:01:56.051071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.051195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.051226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.384 qpair failed and we were unable to recover it. 00:33:32.384 [2024-05-15 02:01:56.051321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.051440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.051465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.384 qpair failed and we were unable to recover it. 00:33:32.384 [2024-05-15 02:01:56.051557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.051678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.051703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.384 qpair failed and we were unable to recover it. 00:33:32.384 [2024-05-15 02:01:56.051848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.051951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.051976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.384 qpair failed and we were unable to recover it. 00:33:32.384 [2024-05-15 02:01:56.052094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.052227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.052255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.384 qpair failed and we were unable to recover it. 00:33:32.384 [2024-05-15 02:01:56.052394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.052533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.052561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.384 qpair failed and we were unable to recover it. 00:33:32.384 [2024-05-15 02:01:56.052700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.052814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.052843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.384 qpair failed and we were unable to recover it. 
00:33:32.384 [2024-05-15 02:01:56.052961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.053071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.053100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.384 qpair failed and we were unable to recover it. 00:33:32.384 [2024-05-15 02:01:56.053253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.053369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.053395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.384 qpair failed and we were unable to recover it. 00:33:32.384 [2024-05-15 02:01:56.053521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.053632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.053657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.384 qpair failed and we were unable to recover it. 00:33:32.384 [2024-05-15 02:01:56.053762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.053914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.053938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.384 qpair failed and we were unable to recover it. 00:33:32.384 [2024-05-15 02:01:56.054027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.054128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.054153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.384 qpair failed and we were unable to recover it. 00:33:32.384 [2024-05-15 02:01:56.054251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.054344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.054370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.384 qpair failed and we were unable to recover it. 00:33:32.384 [2024-05-15 02:01:56.054504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.054606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.054632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.384 qpair failed and we were unable to recover it. 
00:33:32.384 [2024-05-15 02:01:56.054738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.054880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.054908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.384 qpair failed and we were unable to recover it. 00:33:32.384 [2024-05-15 02:01:56.055048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.055192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.055222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.384 qpair failed and we were unable to recover it. 00:33:32.384 [2024-05-15 02:01:56.055347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.055520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.055549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.384 qpair failed and we were unable to recover it. 00:33:32.384 [2024-05-15 02:01:56.055679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.055818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.055846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.384 qpair failed and we were unable to recover it. 00:33:32.384 [2024-05-15 02:01:56.056012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.056134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.056159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.384 qpair failed and we were unable to recover it. 00:33:32.384 [2024-05-15 02:01:56.056334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.056456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.056484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.384 qpair failed and we were unable to recover it. 00:33:32.384 [2024-05-15 02:01:56.056658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.056779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.056804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.384 qpair failed and we were unable to recover it. 
00:33:32.384 [2024-05-15 02:01:56.056924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.057046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.057071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.384 qpair failed and we were unable to recover it. 00:33:32.384 [2024-05-15 02:01:56.057164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.057277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.057303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.384 qpair failed and we were unable to recover it. 00:33:32.384 [2024-05-15 02:01:56.057470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.057602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.384 [2024-05-15 02:01:56.057627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.384 qpair failed and we were unable to recover it. 00:33:32.385 [2024-05-15 02:01:56.057770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.385 [2024-05-15 02:01:56.057890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.385 [2024-05-15 02:01:56.057915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.385 qpair failed and we were unable to recover it. 00:33:32.385 [2024-05-15 02:01:56.058054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.385 [2024-05-15 02:01:56.058178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.385 [2024-05-15 02:01:56.058205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.385 qpair failed and we were unable to recover it. 00:33:32.385 [2024-05-15 02:01:56.058358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.385 [2024-05-15 02:01:56.058457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.385 [2024-05-15 02:01:56.058486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.385 qpair failed and we were unable to recover it. 00:33:32.385 [2024-05-15 02:01:56.058613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.385 [2024-05-15 02:01:56.058731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.385 [2024-05-15 02:01:56.058756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.385 qpair failed and we were unable to recover it. 
00:33:32.385 [2024-05-15 02:01:56.058849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.385 [2024-05-15 02:01:56.058938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.385 [2024-05-15 02:01:56.058963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.385 qpair failed and we were unable to recover it. 00:33:32.385 [2024-05-15 02:01:56.059095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.385 [2024-05-15 02:01:56.059239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.385 [2024-05-15 02:01:56.059280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.385 qpair failed and we were unable to recover it. 00:33:32.385 [2024-05-15 02:01:56.059401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.385 [2024-05-15 02:01:56.059531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.385 [2024-05-15 02:01:56.059557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.385 qpair failed and we were unable to recover it. 00:33:32.385 [2024-05-15 02:01:56.059686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.385 [2024-05-15 02:01:56.059791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.385 [2024-05-15 02:01:56.059819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.385 qpair failed and we were unable to recover it. 00:33:32.385 [2024-05-15 02:01:56.059916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.385 [2024-05-15 02:01:56.060076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.385 [2024-05-15 02:01:56.060104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.385 qpair failed and we were unable to recover it. 00:33:32.385 [2024-05-15 02:01:56.060266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.385 [2024-05-15 02:01:56.060388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.385 [2024-05-15 02:01:56.060430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.385 qpair failed and we were unable to recover it. 00:33:32.385 [2024-05-15 02:01:56.060537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.385 [2024-05-15 02:01:56.060663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.385 [2024-05-15 02:01:56.060690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.385 qpair failed and we were unable to recover it. 
00:33:32.385 [2024-05-15 02:01:56.060836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.385 [2024-05-15 02:01:56.060958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.385 [2024-05-15 02:01:56.060984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.385 qpair failed and we were unable to recover it. 00:33:32.385 [2024-05-15 02:01:56.061083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.385 [2024-05-15 02:01:56.061204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.385 [2024-05-15 02:01:56.061235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.385 qpair failed and we were unable to recover it. 00:33:32.385 [2024-05-15 02:01:56.061380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.385 [2024-05-15 02:01:56.061556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.385 [2024-05-15 02:01:56.061581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.385 qpair failed and we were unable to recover it. 00:33:32.385 [2024-05-15 02:01:56.061675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.385 [2024-05-15 02:01:56.061796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.385 [2024-05-15 02:01:56.061821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.385 qpair failed and we were unable to recover it. 00:33:32.385 [2024-05-15 02:01:56.061913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.385 [2024-05-15 02:01:56.062031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.385 [2024-05-15 02:01:56.062056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.385 qpair failed and we were unable to recover it. 00:33:32.385 [2024-05-15 02:01:56.062201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.385 [2024-05-15 02:01:56.062392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.385 [2024-05-15 02:01:56.062418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.385 qpair failed and we were unable to recover it. 00:33:32.385 [2024-05-15 02:01:56.062540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.385 [2024-05-15 02:01:56.062654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.385 [2024-05-15 02:01:56.062679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.385 qpair failed and we were unable to recover it. 
00:33:32.385 [2024-05-15 02:01:56.062797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.385 [2024-05-15 02:01:56.062889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.385 [2024-05-15 02:01:56.062914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.385 qpair failed and we were unable to recover it. 00:33:32.385 [2024-05-15 02:01:56.063033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.385 [2024-05-15 02:01:56.063154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.385 [2024-05-15 02:01:56.063183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.385 qpair failed and we were unable to recover it. 00:33:32.385 [2024-05-15 02:01:56.063329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.385 [2024-05-15 02:01:56.063453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.385 [2024-05-15 02:01:56.063478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.385 qpair failed and we were unable to recover it. 00:33:32.386 [2024-05-15 02:01:56.063630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.063727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.063752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.386 qpair failed and we were unable to recover it. 00:33:32.386 [2024-05-15 02:01:56.063842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.063964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.063989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.386 qpair failed and we were unable to recover it. 00:33:32.386 [2024-05-15 02:01:56.064117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.064264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.064292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.386 qpair failed and we were unable to recover it. 00:33:32.386 [2024-05-15 02:01:56.064434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.064565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.064590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.386 qpair failed and we were unable to recover it. 
00:33:32.386 [2024-05-15 02:01:56.064735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.064861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.064889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.386 qpair failed and we were unable to recover it. 00:33:32.386 [2024-05-15 02:01:56.065050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.065243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.065287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.386 qpair failed and we were unable to recover it. 00:33:32.386 [2024-05-15 02:01:56.065383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.065507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.065531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.386 qpair failed and we were unable to recover it. 00:33:32.386 [2024-05-15 02:01:56.065648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.065766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.065793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.386 qpair failed and we were unable to recover it. 00:33:32.386 [2024-05-15 02:01:56.065925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.066074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.066101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.386 qpair failed and we were unable to recover it. 00:33:32.386 [2024-05-15 02:01:56.066208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.066317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.066342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.386 qpair failed and we were unable to recover it. 00:33:32.386 [2024-05-15 02:01:56.066473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.066613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.066640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.386 qpair failed and we were unable to recover it. 
00:33:32.386 [2024-05-15 02:01:56.066749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.066892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.066920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.386 qpair failed and we were unable to recover it. 00:33:32.386 [2024-05-15 02:01:56.067063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.067149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.067174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.386 qpair failed and we were unable to recover it. 00:33:32.386 [2024-05-15 02:01:56.067272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.067385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.067411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.386 qpair failed and we were unable to recover it. 00:33:32.386 [2024-05-15 02:01:56.067563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.067679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.067705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.386 qpair failed and we were unable to recover it. 00:33:32.386 [2024-05-15 02:01:56.067793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.067909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.067935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.386 qpair failed and we were unable to recover it. 00:33:32.386 [2024-05-15 02:01:56.068051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.068194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.068227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.386 qpair failed and we were unable to recover it. 00:33:32.386 [2024-05-15 02:01:56.068328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.068465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.068493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.386 qpair failed and we were unable to recover it. 
00:33:32.386 [2024-05-15 02:01:56.068662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.068760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.068786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.386 qpair failed and we were unable to recover it. 00:33:32.386 [2024-05-15 02:01:56.068934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.069029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.069054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.386 qpair failed and we were unable to recover it. 00:33:32.386 [2024-05-15 02:01:56.069182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.069305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.069332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.386 qpair failed and we were unable to recover it. 00:33:32.386 [2024-05-15 02:01:56.069420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.069543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.069568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.386 qpair failed and we were unable to recover it. 00:33:32.386 [2024-05-15 02:01:56.069736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.069842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.069869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.386 qpair failed and we were unable to recover it. 00:33:32.386 [2024-05-15 02:01:56.069997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.070146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.070171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.386 qpair failed and we were unable to recover it. 00:33:32.386 [2024-05-15 02:01:56.070331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.070428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.070453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.386 qpair failed and we were unable to recover it. 
00:33:32.386 [2024-05-15 02:01:56.070544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.070642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.070686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.386 qpair failed and we were unable to recover it. 00:33:32.386 [2024-05-15 02:01:56.070819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.386 [2024-05-15 02:01:56.070984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.387 [2024-05-15 02:01:56.071010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.387 qpair failed and we were unable to recover it. 00:33:32.387 [2024-05-15 02:01:56.071146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.387 [2024-05-15 02:01:56.071298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.387 [2024-05-15 02:01:56.071324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.387 qpair failed and we were unable to recover it. 00:33:32.387 [2024-05-15 02:01:56.071444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.387 [2024-05-15 02:01:56.071562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.387 [2024-05-15 02:01:56.071587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.387 qpair failed and we were unable to recover it. 00:33:32.387 [2024-05-15 02:01:56.071721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.387 [2024-05-15 02:01:56.071852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.387 [2024-05-15 02:01:56.071880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.387 qpair failed and we were unable to recover it. 00:33:32.387 [2024-05-15 02:01:56.072028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.387 [2024-05-15 02:01:56.072117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.387 [2024-05-15 02:01:56.072142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.387 qpair failed and we were unable to recover it. 00:33:32.387 [2024-05-15 02:01:56.072236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.387 [2024-05-15 02:01:56.072361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.387 [2024-05-15 02:01:56.072386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.387 qpair failed and we were unable to recover it. 
00:33:32.392 [2024-05-15 02:01:56.112028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.392 [2024-05-15 02:01:56.112154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.392 [2024-05-15 02:01:56.112182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.392 qpair failed and we were unable to recover it. 00:33:32.392 [2024-05-15 02:01:56.112366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.392 [2024-05-15 02:01:56.112486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.392 [2024-05-15 02:01:56.112511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.392 qpair failed and we were unable to recover it. 00:33:32.392 [2024-05-15 02:01:56.112649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.392 [2024-05-15 02:01:56.112780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.392 [2024-05-15 02:01:56.112808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.392 qpair failed and we were unable to recover it. 00:33:32.392 [2024-05-15 02:01:56.112952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.392 [2024-05-15 02:01:56.113072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.392 [2024-05-15 02:01:56.113097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.392 qpair failed and we were unable to recover it. 00:33:32.392 [2024-05-15 02:01:56.113250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.392 [2024-05-15 02:01:56.113354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.392 [2024-05-15 02:01:56.113383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.392 qpair failed and we were unable to recover it. 00:33:32.392 [2024-05-15 02:01:56.113498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.392 [2024-05-15 02:01:56.113648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.392 [2024-05-15 02:01:56.113673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.392 qpair failed and we were unable to recover it. 00:33:32.392 [2024-05-15 02:01:56.113764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.392 [2024-05-15 02:01:56.113851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.392 [2024-05-15 02:01:56.113893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.392 qpair failed and we were unable to recover it. 
00:33:32.392 [2024-05-15 02:01:56.114011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.392 [2024-05-15 02:01:56.114105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.392 [2024-05-15 02:01:56.114130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.392 qpair failed and we were unable to recover it. 00:33:32.392 [2024-05-15 02:01:56.114253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.392 [2024-05-15 02:01:56.114355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.114383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.393 qpair failed and we were unable to recover it. 00:33:32.393 [2024-05-15 02:01:56.114521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.114684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.114712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.393 qpair failed and we were unable to recover it. 00:33:32.393 [2024-05-15 02:01:56.114837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.115008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.115050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.393 qpair failed and we were unable to recover it. 00:33:32.393 [2024-05-15 02:01:56.115199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.115307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.115332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.393 qpair failed and we were unable to recover it. 00:33:32.393 [2024-05-15 02:01:56.115450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.115548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.115575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.393 qpair failed and we were unable to recover it. 00:33:32.393 [2024-05-15 02:01:56.115713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.115858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.115884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.393 qpair failed and we were unable to recover it. 
00:33:32.393 [2024-05-15 02:01:56.116005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.116183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.116211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.393 qpair failed and we were unable to recover it. 00:33:32.393 [2024-05-15 02:01:56.116412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.116526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.116551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.393 qpair failed and we were unable to recover it. 00:33:32.393 [2024-05-15 02:01:56.116665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.116819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.116843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.393 qpair failed and we were unable to recover it. 00:33:32.393 [2024-05-15 02:01:56.116945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.117115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.117143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.393 qpair failed and we were unable to recover it. 00:33:32.393 [2024-05-15 02:01:56.117328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.117475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.117501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.393 qpair failed and we were unable to recover it. 00:33:32.393 [2024-05-15 02:01:56.117655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.117746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.117771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.393 qpair failed and we were unable to recover it. 00:33:32.393 [2024-05-15 02:01:56.117916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.118078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.118103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.393 qpair failed and we were unable to recover it. 
00:33:32.393 [2024-05-15 02:01:56.118252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.118352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.118395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.393 qpair failed and we were unable to recover it. 00:33:32.393 [2024-05-15 02:01:56.118501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.118647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.118672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.393 qpair failed and we were unable to recover it. 00:33:32.393 [2024-05-15 02:01:56.118794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.118888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.118913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.393 qpair failed and we were unable to recover it. 00:33:32.393 [2024-05-15 02:01:56.119030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.119158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.119184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.393 qpair failed and we were unable to recover it. 00:33:32.393 [2024-05-15 02:01:56.119297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.119416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.119441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.393 qpair failed and we were unable to recover it. 00:33:32.393 [2024-05-15 02:01:56.119584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.119708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.119733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.393 qpair failed and we were unable to recover it. 00:33:32.393 [2024-05-15 02:01:56.119846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.119969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.119993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.393 qpair failed and we were unable to recover it. 
00:33:32.393 [2024-05-15 02:01:56.120128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.120307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.120332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.393 qpair failed and we were unable to recover it. 00:33:32.393 [2024-05-15 02:01:56.120458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.120588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.120617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.393 qpair failed and we were unable to recover it. 00:33:32.393 [2024-05-15 02:01:56.120750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.120878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.120905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.393 qpair failed and we were unable to recover it. 00:33:32.393 [2024-05-15 02:01:56.121049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.121162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.121187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.393 qpair failed and we were unable to recover it. 00:33:32.393 [2024-05-15 02:01:56.121342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.121494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.121519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.393 qpair failed and we were unable to recover it. 00:33:32.393 [2024-05-15 02:01:56.121622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.393 [2024-05-15 02:01:56.121752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.121780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.394 qpair failed and we were unable to recover it. 00:33:32.394 [2024-05-15 02:01:56.121916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.122015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.122043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.394 qpair failed and we were unable to recover it. 
00:33:32.394 [2024-05-15 02:01:56.122185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.122291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.122316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.394 qpair failed and we were unable to recover it. 00:33:32.394 [2024-05-15 02:01:56.122437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.122561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.122586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.394 qpair failed and we were unable to recover it. 00:33:32.394 [2024-05-15 02:01:56.122711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.122845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.122873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.394 qpair failed and we were unable to recover it. 00:33:32.394 [2024-05-15 02:01:56.123054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.123167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.123191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.394 qpair failed and we were unable to recover it. 00:33:32.394 [2024-05-15 02:01:56.123295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.123393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.123418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.394 qpair failed and we were unable to recover it. 00:33:32.394 [2024-05-15 02:01:56.123513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.123603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.123628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.394 qpair failed and we were unable to recover it. 00:33:32.394 [2024-05-15 02:01:56.123785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.123907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.123936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.394 qpair failed and we were unable to recover it. 
00:33:32.394 [2024-05-15 02:01:56.124085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.124241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.124271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.394 qpair failed and we were unable to recover it. 00:33:32.394 [2024-05-15 02:01:56.124387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.124529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.124554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.394 qpair failed and we were unable to recover it. 00:33:32.394 [2024-05-15 02:01:56.124697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.124820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.124847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.394 qpair failed and we were unable to recover it. 00:33:32.394 [2024-05-15 02:01:56.124986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.125079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.125107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.394 qpair failed and we were unable to recover it. 00:33:32.394 [2024-05-15 02:01:56.125235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.125387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.125413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.394 qpair failed and we were unable to recover it. 00:33:32.394 [2024-05-15 02:01:56.125536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.125658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.125684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.394 qpair failed and we were unable to recover it. 00:33:32.394 [2024-05-15 02:01:56.125798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.125952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.125977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.394 qpair failed and we were unable to recover it. 
00:33:32.394 [2024-05-15 02:01:56.126129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.126281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.126310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.394 qpair failed and we were unable to recover it. 00:33:32.394 [2024-05-15 02:01:56.126407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.126543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.126571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.394 qpair failed and we were unable to recover it. 00:33:32.394 [2024-05-15 02:01:56.126708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.126835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.126860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.394 qpair failed and we were unable to recover it. 00:33:32.394 [2024-05-15 02:01:56.127050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.127234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.127260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.394 qpair failed and we were unable to recover it. 00:33:32.394 [2024-05-15 02:01:56.127364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.127452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.127477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.394 qpair failed and we were unable to recover it. 00:33:32.394 [2024-05-15 02:01:56.127575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.127751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.127776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.394 qpair failed and we were unable to recover it. 00:33:32.394 [2024-05-15 02:01:56.127922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.128013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.128037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.394 qpair failed and we were unable to recover it. 
00:33:32.394 [2024-05-15 02:01:56.128157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.128298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.128327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.394 qpair failed and we were unable to recover it. 00:33:32.394 [2024-05-15 02:01:56.128456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.128619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.128644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.394 qpair failed and we were unable to recover it. 00:33:32.394 [2024-05-15 02:01:56.128767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.128928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.128955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.394 qpair failed and we were unable to recover it. 00:33:32.394 [2024-05-15 02:01:56.129108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.129256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.129283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.394 qpair failed and we were unable to recover it. 00:33:32.394 [2024-05-15 02:01:56.129418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.129526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.129554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.394 qpair failed and we were unable to recover it. 00:33:32.394 [2024-05-15 02:01:56.129729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.129864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.394 [2024-05-15 02:01:56.129893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.394 qpair failed and we were unable to recover it. 00:33:32.394 [2024-05-15 02:01:56.130031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.395 [2024-05-15 02:01:56.130153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.395 [2024-05-15 02:01:56.130181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.395 qpair failed and we were unable to recover it. 
00:33:32.395 [2024-05-15 02:01:56.130318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.395 [2024-05-15 02:01:56.130437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.395 [2024-05-15 02:01:56.130463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.395 qpair failed and we were unable to recover it. 00:33:32.395 [2024-05-15 02:01:56.130604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.395 [2024-05-15 02:01:56.130703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.395 [2024-05-15 02:01:56.130730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.395 qpair failed and we were unable to recover it. 00:33:32.395 [2024-05-15 02:01:56.130865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.395 [2024-05-15 02:01:56.131019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.395 [2024-05-15 02:01:56.131047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.395 qpair failed and we were unable to recover it. 00:33:32.395 [2024-05-15 02:01:56.131179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.395 [2024-05-15 02:01:56.131294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.395 [2024-05-15 02:01:56.131324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.395 qpair failed and we were unable to recover it. 00:33:32.395 [2024-05-15 02:01:56.131447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.395 [2024-05-15 02:01:56.131571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.395 [2024-05-15 02:01:56.131595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.395 qpair failed and we were unable to recover it. 00:33:32.395 [2024-05-15 02:01:56.131685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.395 [2024-05-15 02:01:56.131797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.395 [2024-05-15 02:01:56.131822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.395 qpair failed and we were unable to recover it. 00:33:32.395 [2024-05-15 02:01:56.131969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.395 [2024-05-15 02:01:56.132151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.395 [2024-05-15 02:01:56.132176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.395 qpair failed and we were unable to recover it. 
00:33:32.395 [2024-05-15 02:01:56.132285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.395 [2024-05-15 02:01:56.132410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.395 [2024-05-15 02:01:56.132436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.395 qpair failed and we were unable to recover it. 00:33:32.395 [2024-05-15 02:01:56.132557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.395 [2024-05-15 02:01:56.132678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.395 [2024-05-15 02:01:56.132703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.395 qpair failed and we were unable to recover it. 00:33:32.395 [2024-05-15 02:01:56.132792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.395 [2024-05-15 02:01:56.132882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.395 [2024-05-15 02:01:56.132907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.395 qpair failed and we were unable to recover it. 00:33:32.395 [2024-05-15 02:01:56.133010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.395 [2024-05-15 02:01:56.133125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.395 [2024-05-15 02:01:56.133150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.395 qpair failed and we were unable to recover it. 00:33:32.395 [2024-05-15 02:01:56.133266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.395 [2024-05-15 02:01:56.133403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.395 [2024-05-15 02:01:56.133430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.395 qpair failed and we were unable to recover it. 00:33:32.395 [2024-05-15 02:01:56.133552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.395 [2024-05-15 02:01:56.133674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.395 [2024-05-15 02:01:56.133699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.395 qpair failed and we were unable to recover it. 00:33:32.395 [2024-05-15 02:01:56.133820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.395 [2024-05-15 02:01:56.133960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.395 [2024-05-15 02:01:56.133987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.395 qpair failed and we were unable to recover it. 
00:33:32.395 [2024-05-15 02:01:56.134154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.395 [2024-05-15 02:01:56.134265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.395 [2024-05-15 02:01:56.134293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.395 qpair failed and we were unable to recover it. 00:33:32.395 [2024-05-15 02:01:56.134407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.395 [2024-05-15 02:01:56.134582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.395 [2024-05-15 02:01:56.134607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.395 qpair failed and we were unable to recover it. 00:33:32.395 [2024-05-15 02:01:56.134729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.395 [2024-05-15 02:01:56.134851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.395 [2024-05-15 02:01:56.134876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.395 qpair failed and we were unable to recover it. 00:33:32.395 [2024-05-15 02:01:56.134994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.135164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.135193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.396 qpair failed and we were unable to recover it. 00:33:32.396 [2024-05-15 02:01:56.135356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.135480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.135506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.396 qpair failed and we were unable to recover it. 00:33:32.396 [2024-05-15 02:01:56.135612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.135746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.135774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.396 qpair failed and we were unable to recover it. 00:33:32.396 [2024-05-15 02:01:56.135894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.135993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.136017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.396 qpair failed and we were unable to recover it. 
00:33:32.396 [2024-05-15 02:01:56.136141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.136233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.136276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.396 qpair failed and we were unable to recover it. 00:33:32.396 [2024-05-15 02:01:56.136409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.136543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.136570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.396 qpair failed and we were unable to recover it. 00:33:32.396 [2024-05-15 02:01:56.136714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.136862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.136887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.396 qpair failed and we were unable to recover it. 00:33:32.396 [2024-05-15 02:01:56.136986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.137101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.137126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.396 qpair failed and we were unable to recover it. 00:33:32.396 [2024-05-15 02:01:56.137246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.137368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.137394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.396 qpair failed and we were unable to recover it. 00:33:32.396 [2024-05-15 02:01:56.137544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.137643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.137672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.396 qpair failed and we were unable to recover it. 00:33:32.396 [2024-05-15 02:01:56.137783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.137913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.137942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.396 qpair failed and we were unable to recover it. 
00:33:32.396 [2024-05-15 02:01:56.138065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.138209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.138243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.396 qpair failed and we were unable to recover it. 00:33:32.396 [2024-05-15 02:01:56.138373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.138476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.138506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.396 qpair failed and we were unable to recover it. 00:33:32.396 [2024-05-15 02:01:56.138668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.138837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.138865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.396 qpair failed and we were unable to recover it. 00:33:32.396 [2024-05-15 02:01:56.139009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.139114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.139141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.396 qpair failed and we were unable to recover it. 00:33:32.396 [2024-05-15 02:01:56.139247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.139369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.139395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.396 qpair failed and we were unable to recover it. 00:33:32.396 [2024-05-15 02:01:56.139538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.139699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.139728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.396 qpair failed and we were unable to recover it. 00:33:32.396 [2024-05-15 02:01:56.139863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.139985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.140012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.396 qpair failed and we were unable to recover it. 
00:33:32.396 [2024-05-15 02:01:56.140172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.140363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.140389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.396 qpair failed and we were unable to recover it. 00:33:32.396 [2024-05-15 02:01:56.140516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.140663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.140703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.396 qpair failed and we were unable to recover it. 00:33:32.396 [2024-05-15 02:01:56.140808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.140971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.140997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.396 qpair failed and we were unable to recover it. 00:33:32.396 [2024-05-15 02:01:56.141146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.141281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.141314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.396 qpair failed and we were unable to recover it. 00:33:32.396 [2024-05-15 02:01:56.141457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.141588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.141617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.396 qpair failed and we were unable to recover it. 00:33:32.396 [2024-05-15 02:01:56.141761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.141854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.141879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.396 qpair failed and we were unable to recover it. 00:33:32.396 [2024-05-15 02:01:56.142050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.142173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.396 [2024-05-15 02:01:56.142198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.396 qpair failed and we were unable to recover it. 
00:33:32.396 [2024-05-15 02:01:56.142332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.396 [2024-05-15 02:01:56.142475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.396 [2024-05-15 02:01:56.142503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.396 qpair failed and we were unable to recover it.
[... the same four-record failure sequence (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x7f2124000b90 at 10.0.0.2:4420, then "qpair failed and we were unable to recover it.") repeats verbatim from 02:01:56.142332 through 02:01:56.184743 (wall clock 00:33:32.396-00:33:32.403); duplicate records omitted ...]
00:33:32.403 [2024-05-15 02:01:56.184842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.403 [2024-05-15 02:01:56.184933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.403 [2024-05-15 02:01:56.184974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.403 qpair failed and we were unable to recover it. 00:33:32.403 [2024-05-15 02:01:56.185122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.403 [2024-05-15 02:01:56.185246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.403 [2024-05-15 02:01:56.185272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.403 qpair failed and we were unable to recover it. 00:33:32.403 [2024-05-15 02:01:56.185363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.403 [2024-05-15 02:01:56.185476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.403 [2024-05-15 02:01:56.185504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.403 qpair failed and we were unable to recover it. 00:33:32.403 [2024-05-15 02:01:56.185608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.403 [2024-05-15 02:01:56.185729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.403 [2024-05-15 02:01:56.185758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.403 qpair failed and we were unable to recover it. 00:33:32.403 [2024-05-15 02:01:56.185891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.403 [2024-05-15 02:01:56.186063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.403 [2024-05-15 02:01:56.186087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.403 qpair failed and we were unable to recover it. 00:33:32.403 [2024-05-15 02:01:56.186203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.403 [2024-05-15 02:01:56.186304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.403 [2024-05-15 02:01:56.186329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.403 qpair failed and we were unable to recover it. 00:33:32.403 [2024-05-15 02:01:56.186495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.403 [2024-05-15 02:01:56.186601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.403 [2024-05-15 02:01:56.186629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.403 qpair failed and we were unable to recover it. 
00:33:32.403 [2024-05-15 02:01:56.186724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.403 [2024-05-15 02:01:56.186822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.403 [2024-05-15 02:01:56.186850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.403 qpair failed and we were unable to recover it. 00:33:32.403 [2024-05-15 02:01:56.186957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.403 [2024-05-15 02:01:56.187096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.403 [2024-05-15 02:01:56.187121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.403 qpair failed and we were unable to recover it. 00:33:32.403 [2024-05-15 02:01:56.187247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.403 [2024-05-15 02:01:56.187334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.403 [2024-05-15 02:01:56.187360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.403 qpair failed and we were unable to recover it. 00:33:32.403 [2024-05-15 02:01:56.187499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.403 [2024-05-15 02:01:56.187628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.403 [2024-05-15 02:01:56.187656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.403 qpair failed and we were unable to recover it. 00:33:32.403 [2024-05-15 02:01:56.187781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.403 [2024-05-15 02:01:56.187879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.403 [2024-05-15 02:01:56.187907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.403 qpair failed and we were unable to recover it. 00:33:32.403 [2024-05-15 02:01:56.188042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.403 [2024-05-15 02:01:56.188199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.403 [2024-05-15 02:01:56.188247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.403 qpair failed and we were unable to recover it. 00:33:32.403 [2024-05-15 02:01:56.188367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.403 [2024-05-15 02:01:56.188462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.403 [2024-05-15 02:01:56.188491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.403 qpair failed and we were unable to recover it. 
00:33:32.403 [2024-05-15 02:01:56.188607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.403 [2024-05-15 02:01:56.188745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.403 [2024-05-15 02:01:56.188773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.403 qpair failed and we were unable to recover it. 00:33:32.403 [2024-05-15 02:01:56.188920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.403 [2024-05-15 02:01:56.189014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.403 [2024-05-15 02:01:56.189039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.403 qpair failed and we were unable to recover it. 00:33:32.403 [2024-05-15 02:01:56.189166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.403 [2024-05-15 02:01:56.189316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.403 [2024-05-15 02:01:56.189345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.403 qpair failed and we were unable to recover it. 00:33:32.403 [2024-05-15 02:01:56.189467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.403 [2024-05-15 02:01:56.189553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.403 [2024-05-15 02:01:56.189578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.403 qpair failed and we were unable to recover it. 00:33:32.403 [2024-05-15 02:01:56.189671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.403 [2024-05-15 02:01:56.189777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.403 [2024-05-15 02:01:56.189804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.403 qpair failed and we were unable to recover it. 00:33:32.403 [2024-05-15 02:01:56.189916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.403 [2024-05-15 02:01:56.190050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.403 [2024-05-15 02:01:56.190078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.403 qpair failed and we were unable to recover it. 00:33:32.403 [2024-05-15 02:01:56.190180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.190302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.190330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.404 qpair failed and we were unable to recover it. 
00:33:32.404 [2024-05-15 02:01:56.190481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.190619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.190644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.404 qpair failed and we were unable to recover it. 00:33:32.404 [2024-05-15 02:01:56.190761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.190892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.190920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.404 qpair failed and we were unable to recover it. 00:33:32.404 [2024-05-15 02:01:56.191019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.191125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.191157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.404 qpair failed and we were unable to recover it. 00:33:32.404 [2024-05-15 02:01:56.191272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.191384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.191409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.404 qpair failed and we were unable to recover it. 00:33:32.404 [2024-05-15 02:01:56.191505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.191588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.191612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.404 qpair failed and we were unable to recover it. 00:33:32.404 [2024-05-15 02:01:56.191726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.191857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.191884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.404 qpair failed and we were unable to recover it. 00:33:32.404 [2024-05-15 02:01:56.191994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.192112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.192137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.404 qpair failed and we were unable to recover it. 
00:33:32.404 [2024-05-15 02:01:56.192237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.192327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.192369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.404 qpair failed and we were unable to recover it. 00:33:32.404 [2024-05-15 02:01:56.192519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.192609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.192634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.404 qpair failed and we were unable to recover it. 00:33:32.404 [2024-05-15 02:01:56.192739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.192885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.192911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.404 qpair failed and we were unable to recover it. 00:33:32.404 [2024-05-15 02:01:56.193038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.193125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.193150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.404 qpair failed and we were unable to recover it. 00:33:32.404 [2024-05-15 02:01:56.193238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.193354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.193382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.404 qpair failed and we were unable to recover it. 00:33:32.404 [2024-05-15 02:01:56.193507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.193632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.193656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.404 qpair failed and we were unable to recover it. 00:33:32.404 [2024-05-15 02:01:56.193774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.193900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.193928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.404 qpair failed and we were unable to recover it. 
00:33:32.404 [2024-05-15 02:01:56.194061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.194159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.194187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.404 qpair failed and we were unable to recover it. 00:33:32.404 [2024-05-15 02:01:56.194333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.194428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.194453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.404 qpair failed and we were unable to recover it. 00:33:32.404 [2024-05-15 02:01:56.194572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.194692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.194717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.404 qpair failed and we were unable to recover it. 00:33:32.404 [2024-05-15 02:01:56.194831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.194975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.195003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.404 qpair failed and we were unable to recover it. 00:33:32.404 [2024-05-15 02:01:56.195112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.195228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.195253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.404 qpair failed and we were unable to recover it. 00:33:32.404 [2024-05-15 02:01:56.195356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.195443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.195468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.404 qpair failed and we were unable to recover it. 00:33:32.404 [2024-05-15 02:01:56.195565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.195656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.195680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.404 qpair failed and we were unable to recover it. 
00:33:32.404 [2024-05-15 02:01:56.195797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.195900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.195928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.404 qpair failed and we were unable to recover it. 00:33:32.404 [2024-05-15 02:01:56.196068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.196176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.196203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.404 qpair failed and we were unable to recover it. 00:33:32.404 [2024-05-15 02:01:56.196327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.196489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.196517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.404 qpair failed and we were unable to recover it. 00:33:32.404 [2024-05-15 02:01:56.196629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.196743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.196768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.404 qpair failed and we were unable to recover it. 00:33:32.404 [2024-05-15 02:01:56.196902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.197062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.197089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.404 qpair failed and we were unable to recover it. 00:33:32.404 [2024-05-15 02:01:56.197240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.197350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.197378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.404 qpair failed and we were unable to recover it. 00:33:32.404 [2024-05-15 02:01:56.197492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.197591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.197621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.404 qpair failed and we were unable to recover it. 
00:33:32.404 [2024-05-15 02:01:56.197743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.404 [2024-05-15 02:01:56.197857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.197882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.405 qpair failed and we were unable to recover it. 00:33:32.405 [2024-05-15 02:01:56.197978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.198094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.198119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.405 qpair failed and we were unable to recover it. 00:33:32.405 [2024-05-15 02:01:56.198209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.198314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.198338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.405 qpair failed and we were unable to recover it. 00:33:32.405 [2024-05-15 02:01:56.198457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.198573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.198600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.405 qpair failed and we were unable to recover it. 00:33:32.405 [2024-05-15 02:01:56.198725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.198817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.198842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.405 qpair failed and we were unable to recover it. 00:33:32.405 [2024-05-15 02:01:56.198947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.199051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.199077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.405 qpair failed and we were unable to recover it. 00:33:32.405 [2024-05-15 02:01:56.199176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.199283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.199310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.405 qpair failed and we were unable to recover it. 
00:33:32.405 [2024-05-15 02:01:56.199422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.199559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.199587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.405 qpair failed and we were unable to recover it. 00:33:32.405 [2024-05-15 02:01:56.199730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.199852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.199877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.405 qpair failed and we were unable to recover it. 00:33:32.405 [2024-05-15 02:01:56.199989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.200088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.200131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.405 qpair failed and we were unable to recover it. 00:33:32.405 [2024-05-15 02:01:56.200235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.200343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.200386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.405 qpair failed and we were unable to recover it. 00:33:32.405 [2024-05-15 02:01:56.200516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.200627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.200655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.405 qpair failed and we were unable to recover it. 00:33:32.405 [2024-05-15 02:01:56.200794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.200886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.200911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.405 qpair failed and we were unable to recover it. 00:33:32.405 [2024-05-15 02:01:56.201025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.201118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.201160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.405 qpair failed and we were unable to recover it. 
00:33:32.405 [2024-05-15 02:01:56.201260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.201357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.201382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.405 qpair failed and we were unable to recover it. 00:33:32.405 [2024-05-15 02:01:56.201517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.201647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.201674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.405 qpair failed and we were unable to recover it. 00:33:32.405 [2024-05-15 02:01:56.201816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.201941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.201966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.405 qpair failed and we were unable to recover it. 00:33:32.405 [2024-05-15 02:01:56.202130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.202261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.202288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.405 qpair failed and we were unable to recover it. 00:33:32.405 [2024-05-15 02:01:56.202390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.202514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.202542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.405 qpair failed and we were unable to recover it. 00:33:32.405 [2024-05-15 02:01:56.202674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.202777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.202806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.405 qpair failed and we were unable to recover it. 00:33:32.405 [2024-05-15 02:01:56.202920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.203049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.203073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.405 qpair failed and we were unable to recover it. 
00:33:32.405 [2024-05-15 02:01:56.203182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.203286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.203314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.405 qpair failed and we were unable to recover it. 00:33:32.405 [2024-05-15 02:01:56.203417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.203563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.203587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.405 qpair failed and we were unable to recover it. 00:33:32.405 [2024-05-15 02:01:56.203682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.203780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.203805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.405 qpair failed and we were unable to recover it. 00:33:32.405 [2024-05-15 02:01:56.203892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.203990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.405 [2024-05-15 02:01:56.204014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.405 qpair failed and we were unable to recover it. 00:33:32.406 [2024-05-15 02:01:56.204134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.204228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.204257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.406 qpair failed and we were unable to recover it. 00:33:32.406 [2024-05-15 02:01:56.204364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.204473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.204517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.406 qpair failed and we were unable to recover it. 00:33:32.406 [2024-05-15 02:01:56.204615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.204751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.204778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.406 qpair failed and we were unable to recover it. 
00:33:32.406 [2024-05-15 02:01:56.204894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.204988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.205014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.406 qpair failed and we were unable to recover it. 00:33:32.406 [2024-05-15 02:01:56.205124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.205255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.205283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.406 qpair failed and we were unable to recover it. 00:33:32.406 [2024-05-15 02:01:56.205420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.205570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.205597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.406 qpair failed and we were unable to recover it. 00:33:32.406 [2024-05-15 02:01:56.205703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.205857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.205885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.406 qpair failed and we were unable to recover it. 00:33:32.406 [2024-05-15 02:01:56.206003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.206100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.206126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.406 qpair failed and we were unable to recover it. 00:33:32.406 [2024-05-15 02:01:56.206271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.206380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.206408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.406 qpair failed and we were unable to recover it. 00:33:32.406 [2024-05-15 02:01:56.206516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.206643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.206670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.406 qpair failed and we were unable to recover it. 
00:33:32.406 [2024-05-15 02:01:56.206817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.206913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.206938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.406 qpair failed and we were unable to recover it. 00:33:32.406 [2024-05-15 02:01:56.207085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.207167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.207191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.406 qpair failed and we were unable to recover it. 00:33:32.406 [2024-05-15 02:01:56.207291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.207404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.207433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.406 qpair failed and we were unable to recover it. 00:33:32.406 [2024-05-15 02:01:56.207550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.207656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.207683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.406 qpair failed and we were unable to recover it. 00:33:32.406 [2024-05-15 02:01:56.207813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.207924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.207951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.406 qpair failed and we were unable to recover it. 00:33:32.406 [2024-05-15 02:01:56.208069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.208164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.208188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.406 qpair failed and we were unable to recover it. 00:33:32.406 [2024-05-15 02:01:56.208289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.208378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.208402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.406 qpair failed and we were unable to recover it. 
00:33:32.406 [2024-05-15 02:01:56.208494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.208628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.208655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.406 qpair failed and we were unable to recover it. 00:33:32.406 [2024-05-15 02:01:56.208788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.208890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.208917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.406 qpair failed and we were unable to recover it. 00:33:32.406 [2024-05-15 02:01:56.209031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.209132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.209156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.406 qpair failed and we were unable to recover it. 00:33:32.406 [2024-05-15 02:01:56.209283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.209386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.209427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.406 qpair failed and we were unable to recover it. 00:33:32.406 [2024-05-15 02:01:56.209556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.209689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.209716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.406 qpair failed and we were unable to recover it. 00:33:32.406 [2024-05-15 02:01:56.209849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.209979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.210007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.406 qpair failed and we were unable to recover it. 00:33:32.406 [2024-05-15 02:01:56.210126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.210253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.406 [2024-05-15 02:01:56.210279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420 00:33:32.406 qpair failed and we were unable to recover it. 
00:33:32.406 [2024-05-15 02:01:56.210390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.406 [2024-05-15 02:01:56.210494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.406 [2024-05-15 02:01:56.210522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420
00:33:32.406 qpair failed and we were unable to recover it.
00:33:32.406 [... the same three-message cycle — posix.c:1037:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f2124000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats continuously with only the timestamps advancing, from 02:01:56.210390 through 02:01:56.252659 ...]
00:33:32.412 [2024-05-15 02:01:56.253441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.412 [2024-05-15 02:01:56.253575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.412 [2024-05-15 02:01:56.253606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420
00:33:32.412 qpair failed and we were unable to recover it.
00:33:32.412 [2024-05-15 02:01:56.255103] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d8a0f0 (9): Bad file descriptor
00:33:32.412 [2024-05-15 02:01:56.255264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.412 [2024-05-15 02:01:56.255392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.412 [2024-05-15 02:01:56.255421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.412 qpair failed and we were unable to recover it.
00:33:32.687 [2024-05-15 02:01:56.284071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.687 [2024-05-15 02:01:56.284191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.687 [2024-05-15 02:01:56.284222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.687 qpair failed and we were unable to recover it. 00:33:32.687 [2024-05-15 02:01:56.284374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.687 [2024-05-15 02:01:56.284477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.687 [2024-05-15 02:01:56.284503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.687 qpair failed and we were unable to recover it. 00:33:32.687 [2024-05-15 02:01:56.284627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.687 [2024-05-15 02:01:56.284717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.687 [2024-05-15 02:01:56.284742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.687 qpair failed and we were unable to recover it. 00:33:32.687 [2024-05-15 02:01:56.284832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.687 [2024-05-15 02:01:56.284957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.687 [2024-05-15 02:01:56.284983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.687 qpair failed and we were unable to recover it. 00:33:32.687 [2024-05-15 02:01:56.285075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.687 [2024-05-15 02:01:56.285196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.687 [2024-05-15 02:01:56.285226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.687 qpair failed and we were unable to recover it. 00:33:32.687 [2024-05-15 02:01:56.285346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.687 [2024-05-15 02:01:56.285429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.687 [2024-05-15 02:01:56.285455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.687 qpair failed and we were unable to recover it. 00:33:32.687 [2024-05-15 02:01:56.285573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.687 [2024-05-15 02:01:56.285699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.687 [2024-05-15 02:01:56.285724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.687 qpair failed and we were unable to recover it. 
00:33:32.687 [2024-05-15 02:01:56.285850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.687 [2024-05-15 02:01:56.286001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.687 [2024-05-15 02:01:56.286026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.687 qpair failed and we were unable to recover it. 00:33:32.687 [2024-05-15 02:01:56.286142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.687 [2024-05-15 02:01:56.286280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.687 [2024-05-15 02:01:56.286310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.687 qpair failed and we were unable to recover it. 00:33:32.687 [2024-05-15 02:01:56.286475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.687 [2024-05-15 02:01:56.286608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.687 [2024-05-15 02:01:56.286651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.687 qpair failed and we were unable to recover it. 00:33:32.687 [2024-05-15 02:01:56.286749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.687 [2024-05-15 02:01:56.286875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.687 [2024-05-15 02:01:56.286900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.687 qpair failed and we were unable to recover it. 00:33:32.687 [2024-05-15 02:01:56.286996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.687 [2024-05-15 02:01:56.287093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.687 [2024-05-15 02:01:56.287118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.687 qpair failed and we were unable to recover it. 00:33:32.687 [2024-05-15 02:01:56.287225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.687 [2024-05-15 02:01:56.287318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.687 [2024-05-15 02:01:56.287343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.687 qpair failed and we were unable to recover it. 00:33:32.687 [2024-05-15 02:01:56.287459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.687 [2024-05-15 02:01:56.287581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.687 [2024-05-15 02:01:56.287608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.687 qpair failed and we were unable to recover it. 
00:33:32.687 [2024-05-15 02:01:56.287712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.287880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.287907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.688 qpair failed and we were unable to recover it. 00:33:32.688 [2024-05-15 02:01:56.288067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.288186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.288213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.688 qpair failed and we were unable to recover it. 00:33:32.688 [2024-05-15 02:01:56.288346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.288440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.288465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.688 qpair failed and we were unable to recover it. 00:33:32.688 [2024-05-15 02:01:56.288564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.288652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.288678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.688 qpair failed and we were unable to recover it. 00:33:32.688 [2024-05-15 02:01:56.288808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.288923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.288949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.688 qpair failed and we were unable to recover it. 00:33:32.688 [2024-05-15 02:01:56.289072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.289165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.289191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.688 qpair failed and we were unable to recover it. 00:33:32.688 [2024-05-15 02:01:56.289315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.289434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.289459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.688 qpair failed and we were unable to recover it. 
00:33:32.688 [2024-05-15 02:01:56.289573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.289708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.289734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.688 qpair failed and we were unable to recover it. 00:33:32.688 [2024-05-15 02:01:56.289827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.289943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.289968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.688 qpair failed and we were unable to recover it. 00:33:32.688 [2024-05-15 02:01:56.290053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.290151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.290176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.688 qpair failed and we were unable to recover it. 00:33:32.688 [2024-05-15 02:01:56.290299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.290392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.290417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.688 qpair failed and we were unable to recover it. 00:33:32.688 [2024-05-15 02:01:56.290539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.290654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.290678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.688 qpair failed and we were unable to recover it. 00:33:32.688 [2024-05-15 02:01:56.290778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.290901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.290927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.688 qpair failed and we were unable to recover it. 00:33:32.688 [2024-05-15 02:01:56.291020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.291137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.291162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.688 qpair failed and we were unable to recover it. 
00:33:32.688 [2024-05-15 02:01:56.291307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.291463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.291505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.688 qpair failed and we were unable to recover it. 00:33:32.688 [2024-05-15 02:01:56.291730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.291834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.291859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.688 qpair failed and we were unable to recover it. 00:33:32.688 [2024-05-15 02:01:56.291988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.292133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.292159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.688 qpair failed and we were unable to recover it. 00:33:32.688 [2024-05-15 02:01:56.292306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.292428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.292453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.688 qpair failed and we were unable to recover it. 00:33:32.688 [2024-05-15 02:01:56.292545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.292642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.292667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.688 qpair failed and we were unable to recover it. 00:33:32.688 [2024-05-15 02:01:56.292768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.292911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.292938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.688 qpair failed and we were unable to recover it. 00:33:32.688 [2024-05-15 02:01:56.293087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.293213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.293245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.688 qpair failed and we were unable to recover it. 
00:33:32.688 [2024-05-15 02:01:56.293365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.293486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.293512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.688 qpair failed and we were unable to recover it. 00:33:32.688 [2024-05-15 02:01:56.293634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.293757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.293783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.688 qpair failed and we were unable to recover it. 00:33:32.688 [2024-05-15 02:01:56.293879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.294010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.294036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.688 qpair failed and we were unable to recover it. 00:33:32.688 [2024-05-15 02:01:56.294156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.294247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.294273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.688 qpair failed and we were unable to recover it. 00:33:32.688 [2024-05-15 02:01:56.294389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.294541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.294566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.688 qpair failed and we were unable to recover it. 00:33:32.688 [2024-05-15 02:01:56.294670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.294794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.294820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.688 qpair failed and we were unable to recover it. 00:33:32.688 [2024-05-15 02:01:56.294939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.295064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.295091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.688 qpair failed and we were unable to recover it. 
00:33:32.688 [2024-05-15 02:01:56.295223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.295340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.688 [2024-05-15 02:01:56.295384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.688 qpair failed and we were unable to recover it. 00:33:32.689 [2024-05-15 02:01:56.295492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.295651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.295676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.689 qpair failed and we were unable to recover it. 00:33:32.689 [2024-05-15 02:01:56.295764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.295860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.295887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.689 qpair failed and we were unable to recover it. 00:33:32.689 [2024-05-15 02:01:56.296037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.296157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.296182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.689 qpair failed and we were unable to recover it. 00:33:32.689 [2024-05-15 02:01:56.296311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.296405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.296431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.689 qpair failed and we were unable to recover it. 00:33:32.689 [2024-05-15 02:01:56.296563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.296685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.296710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.689 qpair failed and we were unable to recover it. 00:33:32.689 [2024-05-15 02:01:56.296836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.296981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.297006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.689 qpair failed and we were unable to recover it. 
00:33:32.689 [2024-05-15 02:01:56.297124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.297237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.297263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.689 qpair failed and we were unable to recover it. 00:33:32.689 [2024-05-15 02:01:56.297387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.297545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.297590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.689 qpair failed and we were unable to recover it. 00:33:32.689 [2024-05-15 02:01:56.297736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.297866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.297891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.689 qpair failed and we were unable to recover it. 00:33:32.689 [2024-05-15 02:01:56.297977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.298064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.298090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.689 qpair failed and we were unable to recover it. 00:33:32.689 [2024-05-15 02:01:56.298172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.298265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.298293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.689 qpair failed and we were unable to recover it. 00:33:32.689 [2024-05-15 02:01:56.298415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.298562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.298588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.689 qpair failed and we were unable to recover it. 00:33:32.689 [2024-05-15 02:01:56.298710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.298802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.298827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.689 qpair failed and we were unable to recover it. 
00:33:32.689 [2024-05-15 02:01:56.298977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.299091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.299117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.689 qpair failed and we were unable to recover it. 00:33:32.689 [2024-05-15 02:01:56.299239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.299365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.299392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.689 qpair failed and we were unable to recover it. 00:33:32.689 [2024-05-15 02:01:56.299517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.299608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.299635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.689 qpair failed and we were unable to recover it. 00:33:32.689 [2024-05-15 02:01:56.299750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.299874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.299899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.689 qpair failed and we were unable to recover it. 00:33:32.689 [2024-05-15 02:01:56.300014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.300130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.300156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.689 qpair failed and we were unable to recover it. 00:33:32.689 [2024-05-15 02:01:56.300277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.300387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.300412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.689 qpair failed and we were unable to recover it. 00:33:32.689 [2024-05-15 02:01:56.300532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.300621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.300646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.689 qpair failed and we were unable to recover it. 
00:33:32.689 [2024-05-15 02:01:56.300735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.300822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.300847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.689 qpair failed and we were unable to recover it. 00:33:32.689 [2024-05-15 02:01:56.300994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.301117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.301144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.689 qpair failed and we were unable to recover it. 00:33:32.689 [2024-05-15 02:01:56.301269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.301362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.301388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.689 qpair failed and we were unable to recover it. 00:33:32.689 [2024-05-15 02:01:56.301515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.301661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.301687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.689 qpair failed and we were unable to recover it. 00:33:32.689 [2024-05-15 02:01:56.301779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.301907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.301932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.689 qpair failed and we were unable to recover it. 00:33:32.689 [2024-05-15 02:01:56.302026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.302128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.302155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.689 qpair failed and we were unable to recover it. 00:33:32.689 [2024-05-15 02:01:56.302277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.302422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.302448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.689 qpair failed and we were unable to recover it. 
00:33:32.689 [2024-05-15 02:01:56.302599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.302739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.302764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.689 qpair failed and we were unable to recover it. 00:33:32.689 [2024-05-15 02:01:56.302894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.302990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.303016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.689 qpair failed and we were unable to recover it. 00:33:32.689 [2024-05-15 02:01:56.303136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.689 [2024-05-15 02:01:56.303281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.303307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.690 qpair failed and we were unable to recover it. 00:33:32.690 [2024-05-15 02:01:56.303399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.303493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.303520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.690 qpair failed and we were unable to recover it. 00:33:32.690 [2024-05-15 02:01:56.303622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.303720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.303747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.690 qpair failed and we were unable to recover it. 00:33:32.690 [2024-05-15 02:01:56.303847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.303939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.303964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.690 qpair failed and we were unable to recover it. 00:33:32.690 [2024-05-15 02:01:56.304086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.304211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.304242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.690 qpair failed and we were unable to recover it. 
00:33:32.690 [2024-05-15 02:01:56.304342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.304439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.304464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.690 qpair failed and we were unable to recover it. 00:33:32.690 [2024-05-15 02:01:56.304589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.304680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.304706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.690 qpair failed and we were unable to recover it. 00:33:32.690 [2024-05-15 02:01:56.304833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.304954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.304979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.690 qpair failed and we were unable to recover it. 00:33:32.690 [2024-05-15 02:01:56.305094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.305224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.305251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.690 qpair failed and we were unable to recover it. 00:33:32.690 [2024-05-15 02:01:56.305366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.305519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.305548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.690 qpair failed and we were unable to recover it. 00:33:32.690 [2024-05-15 02:01:56.305689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.305784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.305809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.690 qpair failed and we were unable to recover it. 00:33:32.690 [2024-05-15 02:01:56.305926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.306027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.306053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.690 qpair failed and we were unable to recover it. 
00:33:32.690 [2024-05-15 02:01:56.306175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.306278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.306304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.690 qpair failed and we were unable to recover it. 00:33:32.690 [2024-05-15 02:01:56.306455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.306604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.306629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.690 qpair failed and we were unable to recover it. 00:33:32.690 [2024-05-15 02:01:56.306771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.306894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.306920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.690 qpair failed and we were unable to recover it. 00:33:32.690 [2024-05-15 02:01:56.307010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.307102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.307127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.690 qpair failed and we were unable to recover it. 00:33:32.690 [2024-05-15 02:01:56.307276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.307386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.307411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.690 qpair failed and we were unable to recover it. 00:33:32.690 [2024-05-15 02:01:56.307538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.307662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.307688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.690 qpair failed and we were unable to recover it. 00:33:32.690 [2024-05-15 02:01:56.307781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.307928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.307953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.690 qpair failed and we were unable to recover it. 
00:33:32.690 [2024-05-15 02:01:56.308053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.308140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.308166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.690 qpair failed and we were unable to recover it. 00:33:32.690 [2024-05-15 02:01:56.308293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.308417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.308443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.690 qpair failed and we were unable to recover it. 00:33:32.690 [2024-05-15 02:01:56.308570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.308669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.308694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.690 qpair failed and we were unable to recover it. 00:33:32.690 [2024-05-15 02:01:56.308841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.308959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.308985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.690 qpair failed and we were unable to recover it. 00:33:32.690 [2024-05-15 02:01:56.309112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.309237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.309263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.690 qpair failed and we were unable to recover it. 00:33:32.690 [2024-05-15 02:01:56.309399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.690 [2024-05-15 02:01:56.309549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.691 [2024-05-15 02:01:56.309595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.691 qpair failed and we were unable to recover it. 00:33:32.691 [2024-05-15 02:01:56.309737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.691 [2024-05-15 02:01:56.309859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.691 [2024-05-15 02:01:56.309885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:32.691 qpair failed and we were unable to recover it. 
00:33:32.691 [2024-05-15 02:01:56.309981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.691 [2024-05-15 02:01:56.310080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.691 [2024-05-15 02:01:56.310107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:32.691 qpair failed and we were unable to recover it.
[this three-line record (two posix_sock_create connect() failures with errno = 111, then one nvme_tcp_qpair_connect_sock error for tqpair=0x7f211c000b90, each attempt ending in "qpair failed and we were unable to recover it.") repeats back-to-back through 2024-05-15 02:01:56.324194]
00:33:32.692 [2024-05-15 02:01:56.324383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.692 [2024-05-15 02:01:56.324540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.692 [2024-05-15 02:01:56.324573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420
00:33:32.692 qpair failed and we were unable to recover it.
[identical records for tqpair=0x7f2114000b90 repeat through 2024-05-15 02:01:56.334197]
00:33:32.694 [2024-05-15 02:01:56.334364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.694 [2024-05-15 02:01:56.334526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:32.694 [2024-05-15 02:01:56.334554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420
00:33:32.694 qpair failed and we were unable to recover it.
[identical records for tqpair=0x1d7c570 repeat through 2024-05-15 02:01:56.352377, the last record in this stretch]
00:33:32.696 [2024-05-15 02:01:56.352502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.696 [2024-05-15 02:01:56.352690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.696 [2024-05-15 02:01:56.352718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.696 qpair failed and we were unable to recover it. 00:33:32.696 [2024-05-15 02:01:56.352863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.696 [2024-05-15 02:01:56.352983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.696 [2024-05-15 02:01:56.353007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.696 qpair failed and we were unable to recover it. 00:33:32.696 [2024-05-15 02:01:56.353106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.696 [2024-05-15 02:01:56.353194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.696 [2024-05-15 02:01:56.353233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.696 qpair failed and we were unable to recover it. 00:33:32.696 [2024-05-15 02:01:56.353327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.696 [2024-05-15 02:01:56.353473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.696 [2024-05-15 02:01:56.353497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.696 qpair failed and we were unable to recover it. 00:33:32.696 [2024-05-15 02:01:56.353608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.696 [2024-05-15 02:01:56.353737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.696 [2024-05-15 02:01:56.353764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.696 qpair failed and we were unable to recover it. 00:33:32.696 [2024-05-15 02:01:56.353916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.696 [2024-05-15 02:01:56.354037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.696 [2024-05-15 02:01:56.354062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.696 qpair failed and we were unable to recover it. 00:33:32.696 [2024-05-15 02:01:56.354176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.696 [2024-05-15 02:01:56.354306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.696 [2024-05-15 02:01:56.354331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.696 qpair failed and we were unable to recover it. 
00:33:32.696 [2024-05-15 02:01:56.354454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.696 [2024-05-15 02:01:56.354546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.696 [2024-05-15 02:01:56.354570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.696 qpair failed and we were unable to recover it. 00:33:32.696 [2024-05-15 02:01:56.354685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.696 [2024-05-15 02:01:56.354777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.696 [2024-05-15 02:01:56.354802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.696 qpair failed and we were unable to recover it. 00:33:32.696 [2024-05-15 02:01:56.354946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.696 [2024-05-15 02:01:56.355101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.696 [2024-05-15 02:01:56.355129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.696 qpair failed and we were unable to recover it. 00:33:32.696 [2024-05-15 02:01:56.355289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.696 [2024-05-15 02:01:56.355416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.696 [2024-05-15 02:01:56.355442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.696 qpair failed and we were unable to recover it. 00:33:32.696 [2024-05-15 02:01:56.355553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.696 [2024-05-15 02:01:56.355678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.696 [2024-05-15 02:01:56.355702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.696 qpair failed and we were unable to recover it. 00:33:32.696 [2024-05-15 02:01:56.355836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.696 [2024-05-15 02:01:56.355994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.696 [2024-05-15 02:01:56.356021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.696 qpair failed and we were unable to recover it. 00:33:32.696 [2024-05-15 02:01:56.356128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.696 [2024-05-15 02:01:56.356289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.696 [2024-05-15 02:01:56.356315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.696 qpair failed and we were unable to recover it. 
00:33:32.696 [2024-05-15 02:01:56.356461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.696 [2024-05-15 02:01:56.356592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.696 [2024-05-15 02:01:56.356633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.696 qpair failed and we were unable to recover it. 00:33:32.696 [2024-05-15 02:01:56.356740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.696 [2024-05-15 02:01:56.356875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.696 [2024-05-15 02:01:56.356902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.696 qpair failed and we were unable to recover it. 00:33:32.696 [2024-05-15 02:01:56.357034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.696 [2024-05-15 02:01:56.357135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.696 [2024-05-15 02:01:56.357163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.697 qpair failed and we were unable to recover it. 00:33:32.697 [2024-05-15 02:01:56.357282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.357433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.357458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.697 qpair failed and we were unable to recover it. 00:33:32.697 [2024-05-15 02:01:56.357574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.357724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.357748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.697 qpair failed and we were unable to recover it. 00:33:32.697 [2024-05-15 02:01:56.357838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.357985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.358027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.697 qpair failed and we were unable to recover it. 00:33:32.697 [2024-05-15 02:01:56.358112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.358266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.358292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.697 qpair failed and we were unable to recover it. 
00:33:32.697 [2024-05-15 02:01:56.358414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.358554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.358580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.697 qpair failed and we were unable to recover it. 00:33:32.697 [2024-05-15 02:01:56.358698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.358838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.358865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.697 qpair failed and we were unable to recover it. 00:33:32.697 [2024-05-15 02:01:56.358991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.359112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.359137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.697 qpair failed and we were unable to recover it. 00:33:32.697 [2024-05-15 02:01:56.359271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.359391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.359419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.697 qpair failed and we were unable to recover it. 00:33:32.697 [2024-05-15 02:01:56.359528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.359684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.359711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.697 qpair failed and we were unable to recover it. 00:33:32.697 [2024-05-15 02:01:56.359868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.359960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.359984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.697 qpair failed and we were unable to recover it. 00:33:32.697 [2024-05-15 02:01:56.360082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.360179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.360227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.697 qpair failed and we were unable to recover it. 
00:33:32.697 [2024-05-15 02:01:56.360374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.360539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.360574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.697 qpair failed and we were unable to recover it. 00:33:32.697 [2024-05-15 02:01:56.360719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.360821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.360845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.697 qpair failed and we were unable to recover it. 00:33:32.697 [2024-05-15 02:01:56.360943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.361061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.361086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.697 qpair failed and we were unable to recover it. 00:33:32.697 [2024-05-15 02:01:56.361223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.361318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.361343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.697 qpair failed and we were unable to recover it. 00:33:32.697 [2024-05-15 02:01:56.361460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.361582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.361607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.697 qpair failed and we were unable to recover it. 00:33:32.697 [2024-05-15 02:01:56.361721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.361827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.361855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.697 qpair failed and we were unable to recover it. 00:33:32.697 [2024-05-15 02:01:56.361962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.362102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.362127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.697 qpair failed and we were unable to recover it. 
00:33:32.697 [2024-05-15 02:01:56.362227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.362324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.362348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.697 qpair failed and we were unable to recover it. 00:33:32.697 [2024-05-15 02:01:56.362452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.362620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.362647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.697 qpair failed and we were unable to recover it. 00:33:32.697 [2024-05-15 02:01:56.362778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.362941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.362968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.697 qpair failed and we were unable to recover it. 00:33:32.697 [2024-05-15 02:01:56.363064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.363176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.363201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.697 qpair failed and we were unable to recover it. 00:33:32.697 [2024-05-15 02:01:56.363335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.363465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.363492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.697 qpair failed and we were unable to recover it. 00:33:32.697 [2024-05-15 02:01:56.363592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.363702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.363730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.697 qpair failed and we were unable to recover it. 00:33:32.697 [2024-05-15 02:01:56.363901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.363993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.364018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.697 qpair failed and we were unable to recover it. 
00:33:32.697 [2024-05-15 02:01:56.364129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.364260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.364290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.697 qpair failed and we were unable to recover it. 00:33:32.697 [2024-05-15 02:01:56.364405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.364534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.364562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.697 qpair failed and we were unable to recover it. 00:33:32.697 [2024-05-15 02:01:56.364688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.364811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.364837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.697 qpair failed and we were unable to recover it. 00:33:32.697 [2024-05-15 02:01:56.364980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.365137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.365165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.697 qpair failed and we were unable to recover it. 00:33:32.697 [2024-05-15 02:01:56.365272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.697 [2024-05-15 02:01:56.365407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.365435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.698 qpair failed and we were unable to recover it. 00:33:32.698 [2024-05-15 02:01:56.365553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.365671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.365696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.698 qpair failed and we were unable to recover it. 00:33:32.698 [2024-05-15 02:01:56.365853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.365978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.366005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.698 qpair failed and we were unable to recover it. 
00:33:32.698 [2024-05-15 02:01:56.366182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.366311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.366336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.698 qpair failed and we were unable to recover it. 00:33:32.698 [2024-05-15 02:01:56.366428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.366526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.366550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.698 qpair failed and we were unable to recover it. 00:33:32.698 [2024-05-15 02:01:56.366645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.366761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.366785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.698 qpair failed and we were unable to recover it. 00:33:32.698 [2024-05-15 02:01:56.366908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.367053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.367081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.698 qpair failed and we were unable to recover it. 00:33:32.698 [2024-05-15 02:01:56.367200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.367303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.367327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.698 qpair failed and we were unable to recover it. 00:33:32.698 [2024-05-15 02:01:56.367441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.367588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.367615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.698 qpair failed and we were unable to recover it. 00:33:32.698 [2024-05-15 02:01:56.367779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.367894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.367919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.698 qpair failed and we were unable to recover it. 
00:33:32.698 [2024-05-15 02:01:56.368040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.368160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.368184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.698 qpair failed and we were unable to recover it. 00:33:32.698 [2024-05-15 02:01:56.368314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.368423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.368449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.698 qpair failed and we were unable to recover it. 00:33:32.698 [2024-05-15 02:01:56.368584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.368715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.368742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.698 qpair failed and we were unable to recover it. 00:33:32.698 [2024-05-15 02:01:56.368869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.368997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.369023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.698 qpair failed and we were unable to recover it. 00:33:32.698 [2024-05-15 02:01:56.369148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.369255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.369281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.698 qpair failed and we were unable to recover it. 00:33:32.698 [2024-05-15 02:01:56.369385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.369502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.369529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.698 qpair failed and we were unable to recover it. 00:33:32.698 [2024-05-15 02:01:56.369672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.369813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.369838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.698 qpair failed and we were unable to recover it. 
00:33:32.698 [2024-05-15 02:01:56.369976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.370134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.370161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.698 qpair failed and we were unable to recover it. 00:33:32.698 [2024-05-15 02:01:56.370306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.370439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.370466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.698 qpair failed and we were unable to recover it. 00:33:32.698 [2024-05-15 02:01:56.370604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.370701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.370725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.698 qpair failed and we were unable to recover it. 00:33:32.698 [2024-05-15 02:01:56.370828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.370976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.371004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.698 qpair failed and we were unable to recover it. 00:33:32.698 [2024-05-15 02:01:56.371146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.371286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.371311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.698 qpair failed and we were unable to recover it. 00:33:32.698 [2024-05-15 02:01:56.371403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.371515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.371539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.698 qpair failed and we were unable to recover it. 00:33:32.698 [2024-05-15 02:01:56.371683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.371821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.371848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.698 qpair failed and we were unable to recover it. 
00:33:32.698 [2024-05-15 02:01:56.371979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.372083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.372109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.698 qpair failed and we were unable to recover it. 00:33:32.698 [2024-05-15 02:01:56.372267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.372414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.372457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.698 qpair failed and we were unable to recover it. 00:33:32.698 [2024-05-15 02:01:56.372593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.372744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.372771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.698 qpair failed and we were unable to recover it. 00:33:32.698 [2024-05-15 02:01:56.372867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.372989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.373015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.698 qpair failed and we were unable to recover it. 00:33:32.698 [2024-05-15 02:01:56.373129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.373249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.373275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.698 qpair failed and we were unable to recover it. 00:33:32.698 [2024-05-15 02:01:56.373445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.373603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.698 [2024-05-15 02:01:56.373631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.698 qpair failed and we were unable to recover it. 00:33:32.699 [2024-05-15 02:01:56.373755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.699 [2024-05-15 02:01:56.373919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.699 [2024-05-15 02:01:56.373946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.699 qpair failed and we were unable to recover it. 
00:33:32.699 [2024-05-15 02:01:56.374079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.699 [2024-05-15 02:01:56.374203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.699 [2024-05-15 02:01:56.374246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.699 qpair failed and we were unable to recover it. 00:33:32.699 [2024-05-15 02:01:56.374365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.699 [2024-05-15 02:01:56.374498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.699 [2024-05-15 02:01:56.374525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.699 qpair failed and we were unable to recover it. 00:33:32.699 [2024-05-15 02:01:56.374678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.699 [2024-05-15 02:01:56.374838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.699 [2024-05-15 02:01:56.374868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.699 qpair failed and we were unable to recover it. 00:33:32.699 [2024-05-15 02:01:56.374990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.699 [2024-05-15 02:01:56.375093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.699 [2024-05-15 02:01:56.375118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.699 qpair failed and we were unable to recover it. 00:33:32.699 [2024-05-15 02:01:56.375205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.699 [2024-05-15 02:01:56.375312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.699 [2024-05-15 02:01:56.375336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.699 qpair failed and we were unable to recover it. 00:33:32.699 [2024-05-15 02:01:56.375480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.699 [2024-05-15 02:01:56.375592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.699 [2024-05-15 02:01:56.375619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.699 qpair failed and we were unable to recover it. 00:33:32.699 [2024-05-15 02:01:56.375757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.699 [2024-05-15 02:01:56.375903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.699 [2024-05-15 02:01:56.375927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.699 qpair failed and we were unable to recover it. 
00:33:32.699 [2024-05-15 02:01:56.376053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.699 [2024-05-15 02:01:56.376209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.699 [2024-05-15 02:01:56.376243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.699 qpair failed and we were unable to recover it. 00:33:32.699 [2024-05-15 02:01:56.376344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.699 [2024-05-15 02:01:56.376475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.699 [2024-05-15 02:01:56.376502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.699 qpair failed and we were unable to recover it. 00:33:32.699 [2024-05-15 02:01:56.376644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.699 [2024-05-15 02:01:56.376742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.699 [2024-05-15 02:01:56.376767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.699 qpair failed and we were unable to recover it. 00:33:32.699 [2024-05-15 02:01:56.376917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.699 [2024-05-15 02:01:56.377080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.699 [2024-05-15 02:01:56.377107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.699 qpair failed and we were unable to recover it. 00:33:32.699 [2024-05-15 02:01:56.377242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.699 [2024-05-15 02:01:56.377370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.699 [2024-05-15 02:01:56.377412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.699 qpair failed and we were unable to recover it. 00:33:32.699 [2024-05-15 02:01:56.377509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.699 [2024-05-15 02:01:56.377627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.699 [2024-05-15 02:01:56.377652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.699 qpair failed and we were unable to recover it. 00:33:32.699 [2024-05-15 02:01:56.377794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.699 [2024-05-15 02:01:56.377892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.699 [2024-05-15 02:01:56.377920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.699 qpair failed and we were unable to recover it. 
00:33:32.699 [2024-05-15 02:01:56.378024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.699 [2024-05-15 02:01:56.378153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.699 [2024-05-15 02:01:56.378179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.699 qpair failed and we were unable to recover it. 00:33:32.699 [2024-05-15 02:01:56.378326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.699 [2024-05-15 02:01:56.378419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.699 [2024-05-15 02:01:56.378444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.699 qpair failed and we were unable to recover it. 00:33:32.699 [2024-05-15 02:01:56.378613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.699 [2024-05-15 02:01:56.378771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.699 [2024-05-15 02:01:56.378799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.699 qpair failed and we were unable to recover it. 00:33:32.699 [2024-05-15 02:01:56.378952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.699 [2024-05-15 02:01:56.379087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.699 [2024-05-15 02:01:56.379114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.699 qpair failed and we were unable to recover it. 00:33:32.699 [2024-05-15 02:01:56.379267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.699 [2024-05-15 02:01:56.379361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.699 [2024-05-15 02:01:56.379385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.699 qpair failed and we were unable to recover it. 00:33:32.699 [2024-05-15 02:01:56.379499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.699 [2024-05-15 02:01:56.379654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.699 [2024-05-15 02:01:56.379680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.699 qpair failed and we were unable to recover it. 00:33:32.699 [2024-05-15 02:01:56.379810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.699 [2024-05-15 02:01:56.379914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.699 [2024-05-15 02:01:56.379940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.699 qpair failed and we were unable to recover it. 
00:33:32.699 [2024-05-15 02:01:56.380069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.699 [2024-05-15 02:01:56.380156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.699 [2024-05-15 02:01:56.380180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.699 qpair failed and we were unable to recover it.
[... the same failure sequence (two posix_sock_create connect() errors with errno = 111, one nvme_tcp_qpair_connect_sock error, then "qpair failed and we were unable to recover it.") repeats continuously from 02:01:56.380 to 02:01:56.423 -- first for tqpair=0x1d7c570 and, from 02:01:56.415226 onward, for tqpair=0x7f2114000b90 -- always against addr=10.0.0.2, port=4420 ...]
00:33:32.706 [2024-05-15 02:01:56.423083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.423184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.423210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.706 qpair failed and we were unable to recover it.
00:33:32.706 [2024-05-15 02:01:56.423337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.423469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.423496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.706 qpair failed and we were unable to recover it. 00:33:32.706 [2024-05-15 02:01:56.423608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.423708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.423736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.706 qpair failed and we were unable to recover it. 00:33:32.706 [2024-05-15 02:01:56.423853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.423955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.423980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.706 qpair failed and we were unable to recover it. 00:33:32.706 [2024-05-15 02:01:56.424079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.424168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.424194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.706 qpair failed and we were unable to recover it. 00:33:32.706 [2024-05-15 02:01:56.424294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.424412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.424437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.706 qpair failed and we were unable to recover it. 00:33:32.706 [2024-05-15 02:01:56.424564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.424685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.424710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.706 qpair failed and we were unable to recover it. 00:33:32.706 [2024-05-15 02:01:56.424804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.424927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.424952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.706 qpair failed and we were unable to recover it. 
00:33:32.706 [2024-05-15 02:01:56.425071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.425187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.425213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.706 qpair failed and we were unable to recover it. 00:33:32.706 [2024-05-15 02:01:56.425356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.425453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.425478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.706 qpair failed and we were unable to recover it. 00:33:32.706 [2024-05-15 02:01:56.425598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.425735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.425761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.706 qpair failed and we were unable to recover it. 00:33:32.706 [2024-05-15 02:01:56.425860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.425965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.425990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.706 qpair failed and we were unable to recover it. 00:33:32.706 [2024-05-15 02:01:56.426088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.426188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.426214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.706 qpair failed and we were unable to recover it. 00:33:32.706 [2024-05-15 02:01:56.426327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.426419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.426444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.706 qpair failed and we were unable to recover it. 00:33:32.706 [2024-05-15 02:01:56.426538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.426640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.426685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.706 qpair failed and we were unable to recover it. 
00:33:32.706 [2024-05-15 02:01:56.426805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.426939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.426965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.706 qpair failed and we were unable to recover it. 00:33:32.706 [2024-05-15 02:01:56.427064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.427175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.427203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.706 qpair failed and we were unable to recover it. 00:33:32.706 [2024-05-15 02:01:56.427324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.427470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.427496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.706 qpair failed and we were unable to recover it. 00:33:32.706 [2024-05-15 02:01:56.427592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.427715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.427741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.706 qpair failed and we were unable to recover it. 00:33:32.706 [2024-05-15 02:01:56.427893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.428018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.428043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.706 qpair failed and we were unable to recover it. 00:33:32.706 [2024-05-15 02:01:56.428135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.428228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.428255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.706 qpair failed and we were unable to recover it. 00:33:32.706 [2024-05-15 02:01:56.428396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.428498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.428529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.706 qpair failed and we were unable to recover it. 
00:33:32.706 [2024-05-15 02:01:56.428670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.428805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.428833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.706 qpair failed and we were unable to recover it. 00:33:32.706 [2024-05-15 02:01:56.428932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.429038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.429067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.706 qpair failed and we were unable to recover it. 00:33:32.706 [2024-05-15 02:01:56.429219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.429322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.706 [2024-05-15 02:01:56.429348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.706 qpair failed and we were unable to recover it. 00:33:32.706 [2024-05-15 02:01:56.429458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.429573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.429601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.707 qpair failed and we were unable to recover it. 00:33:32.707 [2024-05-15 02:01:56.429747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.429872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.429897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.707 qpair failed and we were unable to recover it. 00:33:32.707 [2024-05-15 02:01:56.429995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.430097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.430123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:32.707 qpair failed and we were unable to recover it. 00:33:32.707 [2024-05-15 02:01:56.430226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.430371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.430400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.707 qpair failed and we were unable to recover it. 
00:33:32.707 [2024-05-15 02:01:56.430502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.430609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.430639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.707 qpair failed and we were unable to recover it. 00:33:32.707 [2024-05-15 02:01:56.430766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.430863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.430888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.707 qpair failed and we were unable to recover it. 00:33:32.707 [2024-05-15 02:01:56.431008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.431098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.431128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.707 qpair failed and we were unable to recover it. 00:33:32.707 [2024-05-15 02:01:56.431251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.431421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.431449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.707 qpair failed and we were unable to recover it. 00:33:32.707 [2024-05-15 02:01:56.431601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.431717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.431740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.707 qpair failed and we were unable to recover it. 00:33:32.707 [2024-05-15 02:01:56.431909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.432050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.432075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.707 qpair failed and we were unable to recover it. 00:33:32.707 [2024-05-15 02:01:56.432178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.432307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.432332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.707 qpair failed and we were unable to recover it. 
00:33:32.707 [2024-05-15 02:01:56.432433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.432575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.432599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.707 qpair failed and we were unable to recover it. 00:33:32.707 [2024-05-15 02:01:56.432711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.432879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.432903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.707 qpair failed and we were unable to recover it. 00:33:32.707 [2024-05-15 02:01:56.433000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.433116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.433157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.707 qpair failed and we were unable to recover it. 00:33:32.707 [2024-05-15 02:01:56.433309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.433431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.433456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.707 qpair failed and we were unable to recover it. 00:33:32.707 [2024-05-15 02:01:56.433615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.433772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.433821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.707 qpair failed and we were unable to recover it. 00:33:32.707 [2024-05-15 02:01:56.433977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.434123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.434147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.707 qpair failed and we were unable to recover it. 00:33:32.707 [2024-05-15 02:01:56.434259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.434360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.434385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.707 qpair failed and we were unable to recover it. 
00:33:32.707 [2024-05-15 02:01:56.434487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.434631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.434658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.707 qpair failed and we were unable to recover it. 00:33:32.707 [2024-05-15 02:01:56.434762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.434919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.434947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.707 qpair failed and we were unable to recover it. 00:33:32.707 [2024-05-15 02:01:56.435115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.435212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.435247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.707 qpair failed and we were unable to recover it. 00:33:32.707 [2024-05-15 02:01:56.435414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.435518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.435544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.707 qpair failed and we were unable to recover it. 00:33:32.707 [2024-05-15 02:01:56.435677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.435788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.435816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.707 qpair failed and we were unable to recover it. 00:33:32.707 [2024-05-15 02:01:56.435945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.436064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.436088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.707 qpair failed and we were unable to recover it. 00:33:32.707 [2024-05-15 02:01:56.436182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.436347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.436375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.707 qpair failed and we were unable to recover it. 
00:33:32.707 [2024-05-15 02:01:56.436495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.436632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.436659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.707 qpair failed and we were unable to recover it. 00:33:32.707 [2024-05-15 02:01:56.436833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.436959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.436983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.707 qpair failed and we were unable to recover it. 00:33:32.707 [2024-05-15 02:01:56.437105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.707 [2024-05-15 02:01:56.437236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.437266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.708 qpair failed and we were unable to recover it. 00:33:32.708 [2024-05-15 02:01:56.437381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.437488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.437517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.708 qpair failed and we were unable to recover it. 00:33:32.708 [2024-05-15 02:01:56.437637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.437781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.437805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.708 qpair failed and we were unable to recover it. 00:33:32.708 [2024-05-15 02:01:56.437895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.437979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.438004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.708 qpair failed and we were unable to recover it. 00:33:32.708 [2024-05-15 02:01:56.438128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.438221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.438263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.708 qpair failed and we were unable to recover it. 
00:33:32.708 [2024-05-15 02:01:56.438409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.438505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.438530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.708 qpair failed and we were unable to recover it. 00:33:32.708 [2024-05-15 02:01:56.438655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.438778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.438803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.708 qpair failed and we were unable to recover it. 00:33:32.708 [2024-05-15 02:01:56.438891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.438976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.439000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.708 qpair failed and we were unable to recover it. 00:33:32.708 [2024-05-15 02:01:56.439110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.439196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.439225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.708 qpair failed and we were unable to recover it. 00:33:32.708 [2024-05-15 02:01:56.439344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.439481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.439509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.708 qpair failed and we were unable to recover it. 00:33:32.708 [2024-05-15 02:01:56.439650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.439775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.439802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.708 qpair failed and we were unable to recover it. 00:33:32.708 [2024-05-15 02:01:56.439945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.440038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.440063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.708 qpair failed and we were unable to recover it. 
00:33:32.708 [2024-05-15 02:01:56.440191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.440322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.440351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.708 qpair failed and we were unable to recover it. 00:33:32.708 [2024-05-15 02:01:56.440486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.440587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.440629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.708 qpair failed and we were unable to recover it. 00:33:32.708 [2024-05-15 02:01:56.440749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.440846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.440870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.708 qpair failed and we were unable to recover it. 00:33:32.708 [2024-05-15 02:01:56.440958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.441076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.441100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.708 qpair failed and we were unable to recover it. 00:33:32.708 [2024-05-15 02:01:56.441202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.441335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.441364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.708 qpair failed and we were unable to recover it. 00:33:32.708 [2024-05-15 02:01:56.441485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.441589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.441614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.708 qpair failed and we were unable to recover it. 00:33:32.708 [2024-05-15 02:01:56.441727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.441836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.441864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.708 qpair failed and we were unable to recover it. 
00:33:32.708 [2024-05-15 02:01:56.441996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.442122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.442149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.708 qpair failed and we were unable to recover it. 00:33:32.708 [2024-05-15 02:01:56.442264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.442359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.442385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.708 qpair failed and we were unable to recover it. 00:33:32.708 [2024-05-15 02:01:56.442526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.442631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.442659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.708 qpair failed and we were unable to recover it. 00:33:32.708 [2024-05-15 02:01:56.442771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.442901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.442930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.708 qpair failed and we were unable to recover it. 00:33:32.708 [2024-05-15 02:01:56.443059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.443191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.443223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.708 qpair failed and we were unable to recover it. 00:33:32.708 [2024-05-15 02:01:56.443325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.443492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.443519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.708 qpair failed and we were unable to recover it. 00:33:32.708 [2024-05-15 02:01:56.443636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.443734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.443758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.708 qpair failed and we were unable to recover it. 
00:33:32.708 [2024-05-15 02:01:56.443853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.444004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.444029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.708 qpair failed and we were unable to recover it. 00:33:32.708 [2024-05-15 02:01:56.444172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.444286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.444316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.708 qpair failed and we were unable to recover it. 00:33:32.708 [2024-05-15 02:01:56.444432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.444558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.444585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.708 qpair failed and we were unable to recover it. 00:33:32.708 [2024-05-15 02:01:56.444738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.444830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.708 [2024-05-15 02:01:56.444855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.708 qpair failed and we were unable to recover it. 00:33:32.709 [2024-05-15 02:01:56.445000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.445136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.445168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.709 qpair failed and we were unable to recover it. 00:33:32.709 [2024-05-15 02:01:56.445322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.445459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.445485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.709 qpair failed and we were unable to recover it. 00:33:32.709 [2024-05-15 02:01:56.445585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.445710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.445735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.709 qpair failed and we were unable to recover it. 
00:33:32.709 [2024-05-15 02:01:56.445827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.445943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.445971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.709 qpair failed and we were unable to recover it. 00:33:32.709 [2024-05-15 02:01:56.446102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.446210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.446250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.709 qpair failed and we were unable to recover it. 00:33:32.709 [2024-05-15 02:01:56.446361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.446498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.446523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.709 qpair failed and we were unable to recover it. 00:33:32.709 [2024-05-15 02:01:56.446656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.446752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.446781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.709 qpair failed and we were unable to recover it. 00:33:32.709 [2024-05-15 02:01:56.446937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.447041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.447070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.709 qpair failed and we were unable to recover it. 00:33:32.709 [2024-05-15 02:01:56.447190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.447298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.447324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.709 qpair failed and we were unable to recover it. 00:33:32.709 [2024-05-15 02:01:56.447462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.447591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.447632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.709 qpair failed and we were unable to recover it. 
00:33:32.709 [2024-05-15 02:01:56.447751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.447850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.447875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.709 qpair failed and we were unable to recover it. 00:33:32.709 [2024-05-15 02:01:56.447972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.448107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.448132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.709 qpair failed and we were unable to recover it. 00:33:32.709 [2024-05-15 02:01:56.448270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.448393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.448421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.709 qpair failed and we were unable to recover it. 00:33:32.709 [2024-05-15 02:01:56.448572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.448691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.448716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.709 qpair failed and we were unable to recover it. 00:33:32.709 [2024-05-15 02:01:56.448809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.448902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.448928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.709 qpair failed and we were unable to recover it. 00:33:32.709 [2024-05-15 02:01:56.449030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.449116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.449141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.709 qpair failed and we were unable to recover it. 00:33:32.709 [2024-05-15 02:01:56.449236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.449394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.449419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.709 qpair failed and we were unable to recover it. 
00:33:32.709 [2024-05-15 02:01:56.449519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.449614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.449639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.709 qpair failed and we were unable to recover it. 00:33:32.709 [2024-05-15 02:01:56.449780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.449925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.449950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.709 qpair failed and we were unable to recover it. 00:33:32.709 [2024-05-15 02:01:56.450068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.450174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.450223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.709 qpair failed and we were unable to recover it. 00:33:32.709 [2024-05-15 02:01:56.450360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.450506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.450531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.709 qpair failed and we were unable to recover it. 00:33:32.709 [2024-05-15 02:01:56.450638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.450728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.450753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.709 qpair failed and we were unable to recover it. 00:33:32.709 [2024-05-15 02:01:56.450839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.450985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.451009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.709 qpair failed and we were unable to recover it. 00:33:32.709 [2024-05-15 02:01:56.451098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.451188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.709 [2024-05-15 02:01:56.451213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.709 qpair failed and we were unable to recover it. 
00:33:32.714 [2024-05-15 02:01:56.489110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.489228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.489254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.714 qpair failed and we were unable to recover it. 00:33:32.714 [2024-05-15 02:01:56.489402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.489532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.489561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.714 qpair failed and we were unable to recover it. 00:33:32.714 [2024-05-15 02:01:56.489692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.489849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.489877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.714 qpair failed and we were unable to recover it. 00:33:32.714 [2024-05-15 02:01:56.489999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.490089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.490114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.714 qpair failed and we were unable to recover it. 00:33:32.714 [2024-05-15 02:01:56.490231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.490358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.490387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.714 qpair failed and we were unable to recover it. 00:33:32.714 [2024-05-15 02:01:56.490496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.490650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.490677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.714 qpair failed and we were unable to recover it. 00:33:32.714 [2024-05-15 02:01:56.490798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.490918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.490943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.714 qpair failed and we were unable to recover it. 
00:33:32.714 [2024-05-15 02:01:56.491037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.491159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.491184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.714 qpair failed and we were unable to recover it. 00:33:32.714 [2024-05-15 02:01:56.491293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.491383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.491407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.714 qpair failed and we were unable to recover it. 00:33:32.714 [2024-05-15 02:01:56.491501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.491622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.491647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.714 qpair failed and we were unable to recover it. 00:33:32.714 [2024-05-15 02:01:56.491770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.491907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.491935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.714 qpair failed and we were unable to recover it. 00:33:32.714 [2024-05-15 02:01:56.492116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.492206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.492254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.714 qpair failed and we were unable to recover it. 00:33:32.714 [2024-05-15 02:01:56.492355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.492468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.492493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.714 qpair failed and we were unable to recover it. 00:33:32.714 [2024-05-15 02:01:56.492649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.492746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.492772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.714 qpair failed and we were unable to recover it. 
00:33:32.714 [2024-05-15 02:01:56.492936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.493076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.493101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.714 qpair failed and we were unable to recover it. 00:33:32.714 [2024-05-15 02:01:56.493193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.493304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.493330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.714 qpair failed and we were unable to recover it. 00:33:32.714 [2024-05-15 02:01:56.493425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.493545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.493572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.714 qpair failed and we were unable to recover it. 00:33:32.714 [2024-05-15 02:01:56.493728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.493831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.493859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.714 qpair failed and we were unable to recover it. 00:33:32.714 [2024-05-15 02:01:56.493979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.494096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.494120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.714 qpair failed and we were unable to recover it. 00:33:32.714 [2024-05-15 02:01:56.494241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.494355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.494383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.714 qpair failed and we were unable to recover it. 00:33:32.714 [2024-05-15 02:01:56.494543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.494666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.494691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.714 qpair failed and we were unable to recover it. 
00:33:32.714 [2024-05-15 02:01:56.494841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.494970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.495009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.714 qpair failed and we were unable to recover it. 00:33:32.714 [2024-05-15 02:01:56.495139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.495269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.495299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.714 qpair failed and we were unable to recover it. 00:33:32.714 [2024-05-15 02:01:56.495410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.495544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.495573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.714 qpair failed and we were unable to recover it. 00:33:32.714 [2024-05-15 02:01:56.495718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.495843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.495868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.714 qpair failed and we were unable to recover it. 00:33:32.714 [2024-05-15 02:01:56.496013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.496156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.496183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.714 qpair failed and we were unable to recover it. 00:33:32.714 [2024-05-15 02:01:56.496330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.714 [2024-05-15 02:01:56.496449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.496474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.715 qpair failed and we were unable to recover it. 00:33:32.715 [2024-05-15 02:01:56.496567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.496712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.496737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.715 qpair failed and we were unable to recover it. 
00:33:32.715 [2024-05-15 02:01:56.496828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.496968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.496995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.715 qpair failed and we were unable to recover it. 00:33:32.715 [2024-05-15 02:01:56.497124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.497271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.497299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.715 qpair failed and we were unable to recover it. 00:33:32.715 [2024-05-15 02:01:56.497426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.497523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.497547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.715 qpair failed and we were unable to recover it. 00:33:32.715 [2024-05-15 02:01:56.497646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.497741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.497766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.715 qpair failed and we were unable to recover it. 00:33:32.715 [2024-05-15 02:01:56.497890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.498043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.498070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.715 qpair failed and we were unable to recover it. 00:33:32.715 [2024-05-15 02:01:56.498223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.498358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.498384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.715 qpair failed and we were unable to recover it. 00:33:32.715 [2024-05-15 02:01:56.498505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.498616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.498644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.715 qpair failed and we were unable to recover it. 
00:33:32.715 [2024-05-15 02:01:56.498779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.498929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.498954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.715 qpair failed and we were unable to recover it. 00:33:32.715 [2024-05-15 02:01:56.499099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.499245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.499289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.715 qpair failed and we were unable to recover it. 00:33:32.715 [2024-05-15 02:01:56.499450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.499606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.499633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.715 qpair failed and we were unable to recover it. 00:33:32.715 [2024-05-15 02:01:56.499800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.499941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.499969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.715 qpair failed and we were unable to recover it. 00:33:32.715 [2024-05-15 02:01:56.500083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.500213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.500245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.715 qpair failed and we were unable to recover it. 00:33:32.715 [2024-05-15 02:01:56.500365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.500472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.500501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.715 qpair failed and we were unable to recover it. 00:33:32.715 [2024-05-15 02:01:56.500634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.500751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.500778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.715 qpair failed and we were unable to recover it. 
00:33:32.715 [2024-05-15 02:01:56.500925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.501046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.501071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.715 qpair failed and we were unable to recover it. 00:33:32.715 [2024-05-15 02:01:56.501186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.501324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.501352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.715 qpair failed and we were unable to recover it. 00:33:32.715 [2024-05-15 02:01:56.501461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.501568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.501595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.715 qpair failed and we were unable to recover it. 00:33:32.715 [2024-05-15 02:01:56.501738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.501857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.501882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.715 qpair failed and we were unable to recover it. 00:33:32.715 [2024-05-15 02:01:56.502001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.502109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.502136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.715 qpair failed and we were unable to recover it. 00:33:32.715 [2024-05-15 02:01:56.502312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.502405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.502430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.715 qpair failed and we were unable to recover it. 00:33:32.715 [2024-05-15 02:01:56.502551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.502671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.502695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.715 qpair failed and we were unable to recover it. 
00:33:32.715 [2024-05-15 02:01:56.502799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.502905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.502933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.715 qpair failed and we were unable to recover it. 00:33:32.715 [2024-05-15 02:01:56.503036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.503143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.503176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.715 qpair failed and we were unable to recover it. 00:33:32.715 [2024-05-15 02:01:56.503299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.503419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.503444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.715 qpair failed and we were unable to recover it. 00:33:32.715 [2024-05-15 02:01:56.503562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.503687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.503714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.715 qpair failed and we were unable to recover it. 00:33:32.715 [2024-05-15 02:01:56.503816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.503961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.503986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.715 qpair failed and we were unable to recover it. 00:33:32.715 [2024-05-15 02:01:56.504112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.504204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.504237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.715 qpair failed and we were unable to recover it. 00:33:32.715 [2024-05-15 02:01:56.504373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.504480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.504507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.715 qpair failed and we were unable to recover it. 
00:33:32.715 [2024-05-15 02:01:56.504718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.504817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.504845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.715 qpair failed and we were unable to recover it. 00:33:32.715 [2024-05-15 02:01:56.504984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.505098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.505122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.715 qpair failed and we were unable to recover it. 00:33:32.715 [2024-05-15 02:01:56.505254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.505380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.505407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.715 qpair failed and we were unable to recover it. 00:33:32.715 [2024-05-15 02:01:56.505501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.505610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.505639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.715 qpair failed and we were unable to recover it. 00:33:32.715 [2024-05-15 02:01:56.505788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.505908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.715 [2024-05-15 02:01:56.505933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.715 qpair failed and we were unable to recover it. 00:33:32.716 [2024-05-15 02:01:56.506074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.506286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.506315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.716 qpair failed and we were unable to recover it. 00:33:32.716 [2024-05-15 02:01:56.506459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.506583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.506611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.716 qpair failed and we were unable to recover it. 
00:33:32.716 [2024-05-15 02:01:56.506753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.506873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.506898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.716 qpair failed and we were unable to recover it. 00:33:32.716 [2024-05-15 02:01:56.507008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.507141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.507181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.716 qpair failed and we were unable to recover it. 00:33:32.716 [2024-05-15 02:01:56.507319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.507443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.507468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.716 qpair failed and we were unable to recover it. 00:33:32.716 [2024-05-15 02:01:56.507566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.507683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.507708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.716 qpair failed and we were unable to recover it. 00:33:32.716 [2024-05-15 02:01:56.507812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.507903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.507928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.716 qpair failed and we were unable to recover it. 00:33:32.716 [2024-05-15 02:01:56.508118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.508272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.508297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.716 qpair failed and we were unable to recover it. 00:33:32.716 [2024-05-15 02:01:56.508478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.508624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.508649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.716 qpair failed and we were unable to recover it. 
00:33:32.716 [2024-05-15 02:01:56.508772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.508934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.508962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.716 qpair failed and we were unable to recover it. 00:33:32.716 [2024-05-15 02:01:56.509108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.509237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.509263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.716 qpair failed and we were unable to recover it. 00:33:32.716 [2024-05-15 02:01:56.509362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.509455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.509480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.716 qpair failed and we were unable to recover it. 00:33:32.716 [2024-05-15 02:01:56.509584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.509673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.509698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.716 qpair failed and we were unable to recover it. 00:33:32.716 [2024-05-15 02:01:56.509820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.509968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.509995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.716 qpair failed and we were unable to recover it. 00:33:32.716 [2024-05-15 02:01:56.510163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.510309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.510352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.716 qpair failed and we were unable to recover it. 00:33:32.716 [2024-05-15 02:01:56.510460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.510568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.510595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.716 qpair failed and we were unable to recover it. 
00:33:32.716 [2024-05-15 02:01:56.510727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.510828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.510870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.716 qpair failed and we were unable to recover it. 00:33:32.716 [2024-05-15 02:01:56.511074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.511240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.511284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.716 qpair failed and we were unable to recover it. 00:33:32.716 [2024-05-15 02:01:56.511422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.511565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.511592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.716 qpair failed and we were unable to recover it. 00:33:32.716 [2024-05-15 02:01:56.511751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.511879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.511907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.716 qpair failed and we were unable to recover it. 00:33:32.716 [2024-05-15 02:01:56.512078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.512198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.512228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.716 qpair failed and we were unable to recover it. 00:33:32.716 [2024-05-15 02:01:56.512375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.512517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.512544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.716 qpair failed and we were unable to recover it. 00:33:32.716 [2024-05-15 02:01:56.512664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.512796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.512823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.716 qpair failed and we were unable to recover it. 
00:33:32.716 [2024-05-15 02:01:56.512961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.513097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.513121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.716 qpair failed and we were unable to recover it. 00:33:32.716 [2024-05-15 02:01:56.513239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.513386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.513414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.716 qpair failed and we were unable to recover it. 00:33:32.716 [2024-05-15 02:01:56.513546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.513675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.513702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.716 qpair failed and we were unable to recover it. 00:33:32.716 [2024-05-15 02:01:56.513817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.513918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.513942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.716 qpair failed and we were unable to recover it. 00:33:32.716 [2024-05-15 02:01:56.514062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.514169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.514196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.716 qpair failed and we were unable to recover it. 00:33:32.716 [2024-05-15 02:01:56.514338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.514445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.514473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.716 qpair failed and we were unable to recover it. 00:33:32.716 [2024-05-15 02:01:56.514622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.514742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.514767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.716 qpair failed and we were unable to recover it. 
00:33:32.716 [2024-05-15 02:01:56.514883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.515025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.515053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.716 qpair failed and we were unable to recover it. 00:33:32.716 [2024-05-15 02:01:56.515191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.515331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.515359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.716 qpair failed and we were unable to recover it. 00:33:32.716 [2024-05-15 02:01:56.515493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.515618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.515642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.716 qpair failed and we were unable to recover it. 00:33:32.716 [2024-05-15 02:01:56.515781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.515923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.515951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.716 qpair failed and we were unable to recover it. 00:33:32.716 [2024-05-15 02:01:56.516080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.516245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.516273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.716 qpair failed and we were unable to recover it. 00:33:32.716 [2024-05-15 02:01:56.516408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.516553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.716 [2024-05-15 02:01:56.516578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.716 qpair failed and we were unable to recover it. 00:33:32.717 [2024-05-15 02:01:56.516726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.717 [2024-05-15 02:01:56.516875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.717 [2024-05-15 02:01:56.516900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.717 qpair failed and we were unable to recover it. 
00:33:32.717 [2024-05-15 02:01:56.517016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.717 [2024-05-15 02:01:56.517148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.717 [2024-05-15 02:01:56.517176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.717 qpair failed and we were unable to recover it. 00:33:32.717 [2024-05-15 02:01:56.517296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.717 [2024-05-15 02:01:56.517394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.717 [2024-05-15 02:01:56.517419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.717 qpair failed and we were unable to recover it. 00:33:32.717 [2024-05-15 02:01:56.517522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.717 [2024-05-15 02:01:56.517645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.717 [2024-05-15 02:01:56.517670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.717 qpair failed and we were unable to recover it. 00:33:32.717 [2024-05-15 02:01:56.517784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.717 [2024-05-15 02:01:56.517877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.717 [2024-05-15 02:01:56.517905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.717 qpair failed and we were unable to recover it. 00:33:32.717 [2024-05-15 02:01:56.518028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.717 [2024-05-15 02:01:56.518141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.717 [2024-05-15 02:01:56.518165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.717 qpair failed and we were unable to recover it. 00:33:32.717 [2024-05-15 02:01:56.518314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.717 [2024-05-15 02:01:56.518491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.717 [2024-05-15 02:01:56.518516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.717 qpair failed and we were unable to recover it. 00:33:32.717 [2024-05-15 02:01:56.518664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.717 [2024-05-15 02:01:56.518754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.717 [2024-05-15 02:01:56.518780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.717 qpair failed and we were unable to recover it. 
00:33:32.717 [2024-05-15 02:01:56.518926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.717 [2024-05-15 02:01:56.519043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.717 [2024-05-15 02:01:56.519067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.717 qpair failed and we were unable to recover it. 00:33:32.717 [2024-05-15 02:01:56.519188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.717 [2024-05-15 02:01:56.519331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.717 [2024-05-15 02:01:56.519359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.717 qpair failed and we were unable to recover it. 00:33:32.717 [2024-05-15 02:01:56.519491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.717 [2024-05-15 02:01:56.519623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.717 [2024-05-15 02:01:56.519650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.717 qpair failed and we were unable to recover it. 00:33:32.717 [2024-05-15 02:01:56.519822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.717 [2024-05-15 02:01:56.519921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.717 [2024-05-15 02:01:56.519947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.717 qpair failed and we were unable to recover it. 00:33:32.717 [2024-05-15 02:01:56.520068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.717 [2024-05-15 02:01:56.520235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.717 [2024-05-15 02:01:56.520263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.717 qpair failed and we were unable to recover it. 00:33:32.717 [2024-05-15 02:01:56.520394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.717 [2024-05-15 02:01:56.520524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.717 [2024-05-15 02:01:56.520552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.717 qpair failed and we were unable to recover it. 00:33:32.717 [2024-05-15 02:01:56.520695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.717 [2024-05-15 02:01:56.520793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.717 [2024-05-15 02:01:56.520822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.717 qpair failed and we were unable to recover it. 
[... the same four-line sequence (two posix_sock_create "connect() failed, errno = 111" records, one nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420" record, and "qpair failed and we were unable to recover it.") repeats for every reconnect attempt through [2024-05-15 02:01:56.562737], log time 00:33:32.722 ...]
00:33:32.722 [2024-05-15 02:01:56.562882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.562972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.562999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.722 qpair failed and we were unable to recover it. 00:33:32.722 [2024-05-15 02:01:56.563146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.563282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.563310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.722 qpair failed and we were unable to recover it. 00:33:32.722 [2024-05-15 02:01:56.563476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.563632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.563657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.722 qpair failed and we were unable to recover it. 00:33:32.722 [2024-05-15 02:01:56.563755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.563884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.563908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.722 qpair failed and we were unable to recover it. 00:33:32.722 [2024-05-15 02:01:56.564049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.564190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.564225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.722 qpair failed and we were unable to recover it. 00:33:32.722 [2024-05-15 02:01:56.564392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.564488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.564516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.722 qpair failed and we were unable to recover it. 00:33:32.722 [2024-05-15 02:01:56.564645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.564781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.564808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.722 qpair failed and we were unable to recover it. 
00:33:32.722 [2024-05-15 02:01:56.564990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.565114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.565139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.722 qpair failed and we were unable to recover it. 00:33:32.722 [2024-05-15 02:01:56.565229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.565331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.565356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.722 qpair failed and we were unable to recover it. 00:33:32.722 [2024-05-15 02:01:56.565472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.565601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.565628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.722 qpair failed and we were unable to recover it. 00:33:32.722 [2024-05-15 02:01:56.565779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.565894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.565918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.722 qpair failed and we were unable to recover it. 00:33:32.722 [2024-05-15 02:01:56.566037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.566137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.566162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.722 qpair failed and we were unable to recover it. 00:33:32.722 [2024-05-15 02:01:56.566260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.566365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.566390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.722 qpair failed and we were unable to recover it. 00:33:32.722 [2024-05-15 02:01:56.566556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.566658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.566685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.722 qpair failed and we were unable to recover it. 
00:33:32.722 [2024-05-15 02:01:56.566821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.566934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.566959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.722 qpair failed and we were unable to recover it. 00:33:32.722 [2024-05-15 02:01:56.567072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.567186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.567221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.722 qpair failed and we were unable to recover it. 00:33:32.722 [2024-05-15 02:01:56.567352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.567488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.567516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.722 qpair failed and we were unable to recover it. 00:33:32.722 [2024-05-15 02:01:56.567720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.567870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.567912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.722 qpair failed and we were unable to recover it. 00:33:32.722 [2024-05-15 02:01:56.568050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.568206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.568240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.722 qpair failed and we were unable to recover it. 00:33:32.722 [2024-05-15 02:01:56.568354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.568459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.568501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.722 qpair failed and we were unable to recover it. 00:33:32.722 [2024-05-15 02:01:56.568625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.568822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.568846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.722 qpair failed and we were unable to recover it. 
00:33:32.722 [2024-05-15 02:01:56.568965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.569080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.569104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.722 qpair failed and we were unable to recover it. 00:33:32.722 [2024-05-15 02:01:56.569227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.569350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.569375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.722 qpair failed and we were unable to recover it. 00:33:32.722 [2024-05-15 02:01:56.569498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.569614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.569639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.722 qpair failed and we were unable to recover it. 00:33:32.722 [2024-05-15 02:01:56.569729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.569820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.569844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.722 qpair failed and we were unable to recover it. 00:33:32.722 [2024-05-15 02:01:56.569931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.570050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.570075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.722 qpair failed and we were unable to recover it. 00:33:32.722 [2024-05-15 02:01:56.570202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.570303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.570329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.722 qpair failed and we were unable to recover it. 00:33:32.722 [2024-05-15 02:01:56.570425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.570522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.570562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.722 qpair failed and we were unable to recover it. 
00:33:32.722 [2024-05-15 02:01:56.570665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.570770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.570796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.722 qpair failed and we were unable to recover it. 00:33:32.722 [2024-05-15 02:01:56.570918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.571031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.571056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.722 qpair failed and we were unable to recover it. 00:33:32.722 [2024-05-15 02:01:56.571167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.571289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.571319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.722 qpair failed and we were unable to recover it. 00:33:32.722 [2024-05-15 02:01:56.571475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.571605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.571632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.722 qpair failed and we were unable to recover it. 00:33:32.722 [2024-05-15 02:01:56.571809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.571929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.722 [2024-05-15 02:01:56.571954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.722 qpair failed and we were unable to recover it. 00:33:32.722 [2024-05-15 02:01:56.572098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.572228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.572257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.723 qpair failed and we were unable to recover it. 00:33:32.723 [2024-05-15 02:01:56.572406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.572521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.572546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.723 qpair failed and we were unable to recover it. 
00:33:32.723 [2024-05-15 02:01:56.572635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.572751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.572776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.723 qpair failed and we were unable to recover it. 00:33:32.723 [2024-05-15 02:01:56.572870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.572967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.572992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.723 qpair failed and we were unable to recover it. 00:33:32.723 [2024-05-15 02:01:56.573085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.573245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.573273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.723 qpair failed and we were unable to recover it. 00:33:32.723 [2024-05-15 02:01:56.573397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.573495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.573519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.723 qpair failed and we were unable to recover it. 00:33:32.723 [2024-05-15 02:01:56.573632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.573724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.573748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.723 qpair failed and we were unable to recover it. 00:33:32.723 [2024-05-15 02:01:56.573846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.573982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.574007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.723 qpair failed and we were unable to recover it. 00:33:32.723 [2024-05-15 02:01:56.574132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.574226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.574254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.723 qpair failed and we were unable to recover it. 
00:33:32.723 [2024-05-15 02:01:56.574346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.574464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.574489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.723 qpair failed and we were unable to recover it. 00:33:32.723 [2024-05-15 02:01:56.574608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.574726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.574751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.723 qpair failed and we were unable to recover it. 00:33:32.723 [2024-05-15 02:01:56.574903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.574994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.575019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.723 qpair failed and we were unable to recover it. 00:33:32.723 [2024-05-15 02:01:56.575144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.575234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.575260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.723 qpair failed and we were unable to recover it. 00:33:32.723 [2024-05-15 02:01:56.575381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.575481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.575506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.723 qpair failed and we were unable to recover it. 00:33:32.723 [2024-05-15 02:01:56.575651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.575784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.575824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.723 qpair failed and we were unable to recover it. 00:33:32.723 [2024-05-15 02:01:56.575955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.576085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.576113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.723 qpair failed and we were unable to recover it. 
00:33:32.723 [2024-05-15 02:01:56.576237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.576366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.576407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.723 qpair failed and we were unable to recover it. 00:33:32.723 [2024-05-15 02:01:56.576538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.576678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.576703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.723 qpair failed and we were unable to recover it. 00:33:32.723 [2024-05-15 02:01:56.576831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.576931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.576959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.723 qpair failed and we were unable to recover it. 00:33:32.723 [2024-05-15 02:01:56.577091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.577227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.577256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.723 qpair failed and we were unable to recover it. 00:33:32.723 [2024-05-15 02:01:56.577389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.577490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.577516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.723 qpair failed and we were unable to recover it. 00:33:32.723 [2024-05-15 02:01:56.577630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.577785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.577813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.723 qpair failed and we were unable to recover it. 00:33:32.723 [2024-05-15 02:01:56.577954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.578092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.578116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.723 qpair failed and we were unable to recover it. 
00:33:32.723 [2024-05-15 02:01:56.578269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.578364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.578389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.723 qpair failed and we were unable to recover it. 00:33:32.723 [2024-05-15 02:01:56.578505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.578641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.578670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.723 qpair failed and we were unable to recover it. 00:33:32.723 [2024-05-15 02:01:56.578785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.578907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.578934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.723 qpair failed and we were unable to recover it. 00:33:32.723 [2024-05-15 02:01:56.579048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.579172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.579196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.723 qpair failed and we were unable to recover it. 00:33:32.723 [2024-05-15 02:01:56.579353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.579515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.579541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.723 qpair failed and we were unable to recover it. 00:33:32.723 [2024-05-15 02:01:56.579655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.579788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.579816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.723 qpair failed and we were unable to recover it. 00:33:32.723 [2024-05-15 02:01:56.579955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.580076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.580101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.723 qpair failed and we were unable to recover it. 
00:33:32.723 [2024-05-15 02:01:56.580249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.580368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.580393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.723 qpair failed and we were unable to recover it. 00:33:32.723 [2024-05-15 02:01:56.580512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.580624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.580666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.723 qpair failed and we were unable to recover it. 00:33:32.723 [2024-05-15 02:01:56.580786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.580897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.580921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.723 qpair failed and we were unable to recover it. 00:33:32.723 [2024-05-15 02:01:56.581069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.581229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.581274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.723 qpair failed and we were unable to recover it. 00:33:32.723 [2024-05-15 02:01:56.581402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.723 [2024-05-15 02:01:56.581506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.581531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.724 qpair failed and we were unable to recover it. 00:33:32.724 [2024-05-15 02:01:56.581626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.581747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.581774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.724 qpair failed and we were unable to recover it. 00:33:32.724 [2024-05-15 02:01:56.581892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.582003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.582029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.724 qpair failed and we were unable to recover it. 
00:33:32.724 [2024-05-15 02:01:56.582150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.582272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.582297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.724 qpair failed and we were unable to recover it. 00:33:32.724 [2024-05-15 02:01:56.582419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.582507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.582531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.724 qpair failed and we were unable to recover it. 00:33:32.724 [2024-05-15 02:01:56.582649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.582769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.582794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.724 qpair failed and we were unable to recover it. 00:33:32.724 [2024-05-15 02:01:56.582939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.583038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.583067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.724 qpair failed and we were unable to recover it. 00:33:32.724 [2024-05-15 02:01:56.583223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.583316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.583341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.724 qpair failed and we were unable to recover it. 00:33:32.724 [2024-05-15 02:01:56.583476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.583636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.583661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.724 qpair failed and we were unable to recover it. 00:33:32.724 [2024-05-15 02:01:56.583849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.583946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.583971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.724 qpair failed and we were unable to recover it. 
00:33:32.724 [2024-05-15 02:01:56.584067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.584192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.584222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.724 qpair failed and we were unable to recover it. 00:33:32.724 [2024-05-15 02:01:56.584351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.584447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.584472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.724 qpair failed and we were unable to recover it. 00:33:32.724 [2024-05-15 02:01:56.584612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.584747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.584774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.724 qpair failed and we were unable to recover it. 00:33:32.724 [2024-05-15 02:01:56.584916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.585012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.585036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.724 qpair failed and we were unable to recover it. 00:33:32.724 [2024-05-15 02:01:56.585121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.585287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.585315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.724 qpair failed and we were unable to recover it. 00:33:32.724 [2024-05-15 02:01:56.585487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.585575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.585599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.724 qpair failed and we were unable to recover it. 00:33:32.724 [2024-05-15 02:01:56.585718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.585840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.585865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.724 qpair failed and we were unable to recover it. 
00:33:32.724 [2024-05-15 02:01:56.586039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.586182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.586207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.724 qpair failed and we were unable to recover it. 00:33:32.724 [2024-05-15 02:01:56.586330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.586420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.586444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.724 qpair failed and we were unable to recover it. 00:33:32.724 [2024-05-15 02:01:56.586563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.586656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.586680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.724 qpair failed and we were unable to recover it. 00:33:32.724 [2024-05-15 02:01:56.586801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.586933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.586960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.724 qpair failed and we were unable to recover it. 00:33:32.724 [2024-05-15 02:01:56.587063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.587185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.587212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.724 qpair failed and we were unable to recover it. 00:33:32.724 [2024-05-15 02:01:56.587389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.587512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.587537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.724 qpair failed and we were unable to recover it. 00:33:32.724 [2024-05-15 02:01:56.587630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.587745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.587770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.724 qpair failed and we were unable to recover it. 
00:33:32.724 [2024-05-15 02:01:56.587942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.588042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.588066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.724 qpair failed and we were unable to recover it. 00:33:32.724 [2024-05-15 02:01:56.588184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.588344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.588386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.724 qpair failed and we were unable to recover it. 00:33:32.724 [2024-05-15 02:01:56.588497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.588608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.588635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.724 qpair failed and we were unable to recover it. 00:33:32.724 [2024-05-15 02:01:56.588737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.588888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.588916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.724 qpair failed and we were unable to recover it. 00:33:32.724 [2024-05-15 02:01:56.589070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.589158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.589182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.724 qpair failed and we were unable to recover it. 00:33:32.724 [2024-05-15 02:01:56.589317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.589499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.589524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.724 qpair failed and we were unable to recover it. 00:33:32.724 [2024-05-15 02:01:56.589654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.589774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.589798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.724 qpair failed and we were unable to recover it. 
00:33:32.724 [2024-05-15 02:01:56.589945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.590036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.590061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.724 qpair failed and we were unable to recover it. 00:33:32.724 [2024-05-15 02:01:56.590183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.590304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.590330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.724 qpair failed and we were unable to recover it. 00:33:32.724 [2024-05-15 02:01:56.590475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.590611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.590639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.724 qpair failed and we were unable to recover it. 00:33:32.724 [2024-05-15 02:01:56.590758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.590852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.590877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.724 qpair failed and we were unable to recover it. 00:33:32.724 [2024-05-15 02:01:56.590973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.591094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.591118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.724 qpair failed and we were unable to recover it. 00:33:32.724 [2024-05-15 02:01:56.591256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.591395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.591421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.724 qpair failed and we were unable to recover it. 00:33:32.724 [2024-05-15 02:01:56.591536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.591663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:32.724 [2024-05-15 02:01:56.591689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:32.724 qpair failed and we were unable to recover it. 
00:33:33.009 [2024-05-15 02:01:56.631726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.009 [2024-05-15 02:01:56.631852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.009 [2024-05-15 02:01:56.631877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.009 qpair failed and we were unable to recover it. 00:33:33.009 [2024-05-15 02:01:56.631983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.009 [2024-05-15 02:01:56.632143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.009 [2024-05-15 02:01:56.632171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.009 qpair failed and we were unable to recover it. 00:33:33.009 [2024-05-15 02:01:56.632319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.009 [2024-05-15 02:01:56.632447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.009 [2024-05-15 02:01:56.632474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.009 qpair failed and we were unable to recover it. 00:33:33.009 [2024-05-15 02:01:56.632608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.009 [2024-05-15 02:01:56.632732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.009 [2024-05-15 02:01:56.632757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.009 qpair failed and we were unable to recover it. 00:33:33.009 [2024-05-15 02:01:56.632905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.009 [2024-05-15 02:01:56.633033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.633061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.010 qpair failed and we were unable to recover it. 00:33:33.010 [2024-05-15 02:01:56.633171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.633304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.633332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.010 qpair failed and we were unable to recover it. 00:33:33.010 [2024-05-15 02:01:56.633451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.633594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.633619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.010 qpair failed and we were unable to recover it. 
00:33:33.010 [2024-05-15 02:01:56.633733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.633867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.633896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.010 qpair failed and we were unable to recover it. 00:33:33.010 [2024-05-15 02:01:56.634025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.634161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.634185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.010 qpair failed and we were unable to recover it. 00:33:33.010 [2024-05-15 02:01:56.634338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.634485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.634528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.010 qpair failed and we were unable to recover it. 00:33:33.010 [2024-05-15 02:01:56.634664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.634794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.634836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.010 qpair failed and we were unable to recover it. 00:33:33.010 [2024-05-15 02:01:56.634958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.635096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.635123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.010 qpair failed and we were unable to recover it. 00:33:33.010 [2024-05-15 02:01:56.635264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.635378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.635403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.010 qpair failed and we were unable to recover it. 00:33:33.010 [2024-05-15 02:01:56.635550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.635674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.635698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.010 qpair failed and we were unable to recover it. 
00:33:33.010 [2024-05-15 02:01:56.635876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.635974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.635999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.010 qpair failed and we were unable to recover it. 00:33:33.010 [2024-05-15 02:01:56.636111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.636241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.636267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.010 qpair failed and we were unable to recover it. 00:33:33.010 [2024-05-15 02:01:56.636377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.636478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.636518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.010 qpair failed and we were unable to recover it. 00:33:33.010 [2024-05-15 02:01:56.636635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.636836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.636863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.010 qpair failed and we were unable to recover it. 00:33:33.010 [2024-05-15 02:01:56.636977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.637095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.637120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.010 qpair failed and we were unable to recover it. 00:33:33.010 [2024-05-15 02:01:56.637211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.637386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.637413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.010 qpair failed and we were unable to recover it. 00:33:33.010 [2024-05-15 02:01:56.637572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.637708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.637732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.010 qpair failed and we were unable to recover it. 
00:33:33.010 [2024-05-15 02:01:56.637827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.637936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.637961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.010 qpair failed and we were unable to recover it. 00:33:33.010 [2024-05-15 02:01:56.638086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.638205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.638240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.010 qpair failed and we were unable to recover it. 00:33:33.010 [2024-05-15 02:01:56.638415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.638557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.638582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.010 qpair failed and we were unable to recover it. 00:33:33.010 [2024-05-15 02:01:56.638700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.638913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.638941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.010 qpair failed and we were unable to recover it. 00:33:33.010 [2024-05-15 02:01:56.639112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.639233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.639260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.010 qpair failed and we were unable to recover it. 00:33:33.010 [2024-05-15 02:01:56.639405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.639505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.639533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.010 qpair failed and we were unable to recover it. 00:33:33.010 [2024-05-15 02:01:56.639737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.639849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.639874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.010 qpair failed and we were unable to recover it. 
00:33:33.010 [2024-05-15 02:01:56.639998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.640169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.640193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.010 qpair failed and we were unable to recover it. 00:33:33.010 [2024-05-15 02:01:56.640342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.640440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.640465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.010 qpair failed and we were unable to recover it. 00:33:33.010 [2024-05-15 02:01:56.640549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.640670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.640694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.010 qpair failed and we were unable to recover it. 00:33:33.010 [2024-05-15 02:01:56.640842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.640990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.641014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.010 qpair failed and we were unable to recover it. 00:33:33.010 [2024-05-15 02:01:56.641110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.641226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.641252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.010 qpair failed and we were unable to recover it. 00:33:33.010 [2024-05-15 02:01:56.641350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.641445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.010 [2024-05-15 02:01:56.641470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.011 qpair failed and we were unable to recover it. 00:33:33.011 [2024-05-15 02:01:56.641564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.641684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.641709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.011 qpair failed and we were unable to recover it. 
00:33:33.011 [2024-05-15 02:01:56.641883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.642086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.642115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.011 qpair failed and we were unable to recover it. 00:33:33.011 [2024-05-15 02:01:56.642264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.642384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.642410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.011 qpair failed and we were unable to recover it. 00:33:33.011 [2024-05-15 02:01:56.642588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.642701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.642725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.011 qpair failed and we were unable to recover it. 00:33:33.011 [2024-05-15 02:01:56.642888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.643029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.643056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.011 qpair failed and we were unable to recover it. 00:33:33.011 [2024-05-15 02:01:56.643200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.643339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.643365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.011 qpair failed and we were unable to recover it. 00:33:33.011 [2024-05-15 02:01:56.643510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.643666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.643693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.011 qpair failed and we were unable to recover it. 00:33:33.011 [2024-05-15 02:01:56.643795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.643921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.643948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.011 qpair failed and we were unable to recover it. 
00:33:33.011 [2024-05-15 02:01:56.644122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.644270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.644314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.011 qpair failed and we were unable to recover it. 00:33:33.011 [2024-05-15 02:01:56.644458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.644590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.644617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.011 qpair failed and we were unable to recover it. 00:33:33.011 [2024-05-15 02:01:56.644823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.644954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.644981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.011 qpair failed and we were unable to recover it. 00:33:33.011 [2024-05-15 02:01:56.645119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.645247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.645273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.011 qpair failed and we were unable to recover it. 00:33:33.011 [2024-05-15 02:01:56.645384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.645490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.645517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.011 qpair failed and we were unable to recover it. 00:33:33.011 [2024-05-15 02:01:56.645722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.645867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.645907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.011 qpair failed and we were unable to recover it. 00:33:33.011 [2024-05-15 02:01:56.646037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.646164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.646188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.011 qpair failed and we were unable to recover it. 
00:33:33.011 [2024-05-15 02:01:56.646289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.646408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.646436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.011 qpair failed and we were unable to recover it. 00:33:33.011 [2024-05-15 02:01:56.646543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.646667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.646695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.011 qpair failed and we were unable to recover it. 00:33:33.011 [2024-05-15 02:01:56.646835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.646958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.646984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.011 qpair failed and we were unable to recover it. 00:33:33.011 [2024-05-15 02:01:56.647203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.647367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.647393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.011 qpair failed and we were unable to recover it. 00:33:33.011 [2024-05-15 02:01:56.647518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.647654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.647681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.011 qpair failed and we were unable to recover it. 00:33:33.011 [2024-05-15 02:01:56.647830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.647949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.647975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.011 qpair failed and we were unable to recover it. 00:33:33.011 [2024-05-15 02:01:56.648118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.648260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.648287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.011 qpair failed and we were unable to recover it. 
00:33:33.011 [2024-05-15 02:01:56.648437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.648530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.648555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.011 qpair failed and we were unable to recover it. 00:33:33.011 [2024-05-15 02:01:56.648669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.648820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.011 [2024-05-15 02:01:56.648845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.011 qpair failed and we were unable to recover it. 00:33:33.012 [2024-05-15 02:01:56.648963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.649059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.649083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.012 qpair failed and we were unable to recover it. 00:33:33.012 [2024-05-15 02:01:56.649232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.649377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.649402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.012 qpair failed and we were unable to recover it. 00:33:33.012 [2024-05-15 02:01:56.649521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.649613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.649638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.012 qpair failed and we were unable to recover it. 00:33:33.012 [2024-05-15 02:01:56.649779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.649912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.649939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.012 qpair failed and we were unable to recover it. 00:33:33.012 [2024-05-15 02:01:56.650040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.650242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.650267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.012 qpair failed and we were unable to recover it. 
00:33:33.012 [2024-05-15 02:01:56.650387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.650513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.650537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.012 qpair failed and we were unable to recover it. 00:33:33.012 [2024-05-15 02:01:56.650707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.650807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.650834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.012 qpair failed and we were unable to recover it. 00:33:33.012 [2024-05-15 02:01:56.650925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.651139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.651164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.012 qpair failed and we were unable to recover it. 00:33:33.012 [2024-05-15 02:01:56.651264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.651369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.651394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.012 qpair failed and we were unable to recover it. 00:33:33.012 [2024-05-15 02:01:56.651515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.651683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.651711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.012 qpair failed and we were unable to recover it. 00:33:33.012 [2024-05-15 02:01:56.651852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.651980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.652008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.012 qpair failed and we were unable to recover it. 00:33:33.012 [2024-05-15 02:01:56.652122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.652248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.652275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.012 qpair failed and we were unable to recover it. 
00:33:33.012 [2024-05-15 02:01:56.652415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.652524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.652553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.012 qpair failed and we were unable to recover it. 00:33:33.012 [2024-05-15 02:01:56.652653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.652763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.652791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.012 qpair failed and we were unable to recover it. 00:33:33.012 [2024-05-15 02:01:56.652922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.653039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.653064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.012 qpair failed and we were unable to recover it. 00:33:33.012 [2024-05-15 02:01:56.653175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.653313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.653355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.012 qpair failed and we were unable to recover it. 00:33:33.012 [2024-05-15 02:01:56.653503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.653637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.653665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.012 qpair failed and we were unable to recover it. 00:33:33.012 [2024-05-15 02:01:56.653805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.653924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.653949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.012 qpair failed and we were unable to recover it. 00:33:33.012 [2024-05-15 02:01:56.654064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.654174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.654202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.012 qpair failed and we were unable to recover it. 
00:33:33.012 [2024-05-15 02:01:56.654361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.654479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.654504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.012 qpair failed and we were unable to recover it. 00:33:33.012 [2024-05-15 02:01:56.654623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.654771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.654796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.012 qpair failed and we were unable to recover it. 00:33:33.012 [2024-05-15 02:01:56.654914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.655026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.655055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.012 qpair failed and we were unable to recover it. 00:33:33.012 [2024-05-15 02:01:56.655188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.655341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.655370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.012 qpair failed and we were unable to recover it. 00:33:33.012 [2024-05-15 02:01:56.655538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.655637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.655662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.012 qpair failed and we were unable to recover it. 00:33:33.012 [2024-05-15 02:01:56.655777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.655871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.655896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.012 qpair failed and we were unable to recover it. 00:33:33.012 [2024-05-15 02:01:56.656043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.656176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.656203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.012 qpair failed and we were unable to recover it. 
00:33:33.012 [2024-05-15 02:01:56.656357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.656456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.656481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.012 qpair failed and we were unable to recover it. 00:33:33.012 [2024-05-15 02:01:56.656586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.656676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.656701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.012 qpair failed and we were unable to recover it. 00:33:33.012 [2024-05-15 02:01:56.656787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.656920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.656949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.012 qpair failed and we were unable to recover it. 00:33:33.012 [2024-05-15 02:01:56.657083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.012 [2024-05-15 02:01:56.657178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.657202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.013 qpair failed and we were unable to recover it. 00:33:33.013 [2024-05-15 02:01:56.657333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.657427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.657452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.013 qpair failed and we were unable to recover it. 00:33:33.013 [2024-05-15 02:01:56.657586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.657703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.657731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.013 qpair failed and we were unable to recover it. 00:33:33.013 [2024-05-15 02:01:56.657833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.657923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.657947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.013 qpair failed and we were unable to recover it. 
00:33:33.013 [2024-05-15 02:01:56.658108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.658241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.658268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.013 qpair failed and we were unable to recover it. 00:33:33.013 [2024-05-15 02:01:56.658370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.658489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.658514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.013 qpair failed and we were unable to recover it. 00:33:33.013 [2024-05-15 02:01:56.658671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.658793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.658834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.013 qpair failed and we were unable to recover it. 00:33:33.013 [2024-05-15 02:01:56.658973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.659102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.659129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.013 qpair failed and we were unable to recover it. 00:33:33.013 [2024-05-15 02:01:56.659264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.659360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.659388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.013 qpair failed and we were unable to recover it. 00:33:33.013 [2024-05-15 02:01:56.659512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.659613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.659638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.013 qpair failed and we were unable to recover it. 00:33:33.013 [2024-05-15 02:01:56.659748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.659844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.659887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.013 qpair failed and we were unable to recover it. 
00:33:33.013 [2024-05-15 02:01:56.660044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.660148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.660176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.013 qpair failed and we were unable to recover it. 00:33:33.013 [2024-05-15 02:01:56.660275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.660397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.660422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.013 qpair failed and we were unable to recover it. 00:33:33.013 [2024-05-15 02:01:56.660518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.660664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.660688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.013 qpair failed and we were unable to recover it. 00:33:33.013 [2024-05-15 02:01:56.660818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.660964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.660989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.013 qpair failed and we were unable to recover it. 00:33:33.013 [2024-05-15 02:01:56.661113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.661255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.661280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.013 qpair failed and we were unable to recover it. 00:33:33.013 [2024-05-15 02:01:56.661429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.661566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.661594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.013 qpair failed and we were unable to recover it. 00:33:33.013 [2024-05-15 02:01:56.661691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.661799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.661828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.013 qpair failed and we were unable to recover it. 
00:33:33.013 [2024-05-15 02:01:56.661972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.662090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.662115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.013 qpair failed and we were unable to recover it. 00:33:33.013 [2024-05-15 02:01:56.662238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.662347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.662375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.013 qpair failed and we were unable to recover it. 00:33:33.013 [2024-05-15 02:01:56.662524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.662619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.662644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.013 qpair failed and we were unable to recover it. 00:33:33.013 [2024-05-15 02:01:56.662773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.662869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.662894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.013 qpair failed and we were unable to recover it. 00:33:33.013 [2024-05-15 02:01:56.663013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.663102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.663127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.013 qpair failed and we were unable to recover it. 00:33:33.013 [2024-05-15 02:01:56.663231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.663348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.663375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.013 qpair failed and we were unable to recover it. 00:33:33.013 [2024-05-15 02:01:56.663502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.663592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.013 [2024-05-15 02:01:56.663617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.013 qpair failed and we were unable to recover it. 
00:33:33.018 [2024-05-15 02:01:56.703403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.703494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.703518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.019 qpair failed and we were unable to recover it. 00:33:33.019 [2024-05-15 02:01:56.703646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.703770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.703795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.019 qpair failed and we were unable to recover it. 00:33:33.019 [2024-05-15 02:01:56.703899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.704022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.704046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.019 qpair failed and we were unable to recover it. 00:33:33.019 [2024-05-15 02:01:56.704195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.704345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.704373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.019 qpair failed and we were unable to recover it. 00:33:33.019 [2024-05-15 02:01:56.704506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.704610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.704637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.019 qpair failed and we were unable to recover it. 00:33:33.019 [2024-05-15 02:01:56.704763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.704857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.704882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.019 qpair failed and we were unable to recover it. 00:33:33.019 [2024-05-15 02:01:56.704978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.705128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.705156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.019 qpair failed and we were unable to recover it. 
00:33:33.019 [2024-05-15 02:01:56.705263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.705415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.705441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.019 qpair failed and we were unable to recover it. 00:33:33.019 [2024-05-15 02:01:56.705568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.705689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.705713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.019 qpair failed and we were unable to recover it. 00:33:33.019 [2024-05-15 02:01:56.705871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.706011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.706038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.019 qpair failed and we were unable to recover it. 00:33:33.019 [2024-05-15 02:01:56.706159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.706253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.706278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.019 qpair failed and we were unable to recover it. 00:33:33.019 [2024-05-15 02:01:56.706407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.706503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.706528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.019 qpair failed and we were unable to recover it. 00:33:33.019 [2024-05-15 02:01:56.706650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.706801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.706828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.019 qpair failed and we were unable to recover it. 00:33:33.019 [2024-05-15 02:01:56.706930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.707039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.707066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.019 qpair failed and we were unable to recover it. 
00:33:33.019 [2024-05-15 02:01:56.707200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.707301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.707327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.019 qpair failed and we were unable to recover it. 00:33:33.019 [2024-05-15 02:01:56.707428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.707529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.707569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.019 qpair failed and we were unable to recover it. 00:33:33.019 [2024-05-15 02:01:56.707673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.707793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.707820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.019 qpair failed and we were unable to recover it. 00:33:33.019 [2024-05-15 02:01:56.707969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.708091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.708116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.019 qpair failed and we were unable to recover it. 00:33:33.019 [2024-05-15 02:01:56.708210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.708345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.708373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.019 qpair failed and we were unable to recover it. 00:33:33.019 [2024-05-15 02:01:56.708502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.708604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.708631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.019 qpair failed and we were unable to recover it. 00:33:33.019 [2024-05-15 02:01:56.708765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.708856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.708881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.019 qpair failed and we were unable to recover it. 
00:33:33.019 [2024-05-15 02:01:56.708988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.709083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.709108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.019 qpair failed and we were unable to recover it. 00:33:33.019 [2024-05-15 02:01:56.709228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.709323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.709364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.019 qpair failed and we were unable to recover it. 00:33:33.019 [2024-05-15 02:01:56.709538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.709655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.709680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.019 qpair failed and we were unable to recover it. 00:33:33.019 [2024-05-15 02:01:56.709799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.709903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.709930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.019 qpair failed and we were unable to recover it. 00:33:33.019 [2024-05-15 02:01:56.710030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.710169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.710194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.019 qpair failed and we were unable to recover it. 00:33:33.019 [2024-05-15 02:01:56.710312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.710430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.710456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.019 qpair failed and we were unable to recover it. 00:33:33.019 [2024-05-15 02:01:56.710577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.710699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.710724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.019 qpair failed and we were unable to recover it. 
00:33:33.019 [2024-05-15 02:01:56.710815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.710934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.710960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.019 qpair failed and we were unable to recover it. 00:33:33.019 [2024-05-15 02:01:56.711059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.019 [2024-05-15 02:01:56.711204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.711240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.020 qpair failed and we were unable to recover it. 00:33:33.020 [2024-05-15 02:01:56.711387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.711514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.711542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.020 qpair failed and we were unable to recover it. 00:33:33.020 [2024-05-15 02:01:56.711709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.711809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.711837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.020 qpair failed and we were unable to recover it. 00:33:33.020 [2024-05-15 02:01:56.711981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.712078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.712102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.020 qpair failed and we were unable to recover it. 00:33:33.020 [2024-05-15 02:01:56.712243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.712351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.712378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.020 qpair failed and we were unable to recover it. 00:33:33.020 [2024-05-15 02:01:56.712481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.712581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.712610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.020 qpair failed and we were unable to recover it. 
00:33:33.020 [2024-05-15 02:01:56.712774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.712892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.712917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.020 qpair failed and we were unable to recover it. 00:33:33.020 [2024-05-15 02:01:56.713105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.713197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.713229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.020 qpair failed and we were unable to recover it. 00:33:33.020 [2024-05-15 02:01:56.713352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.713507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.713534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.020 qpair failed and we were unable to recover it. 00:33:33.020 [2024-05-15 02:01:56.713642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.713731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.713756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.020 qpair failed and we were unable to recover it. 00:33:33.020 [2024-05-15 02:01:56.713898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.714023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.714064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.020 qpair failed and we were unable to recover it. 00:33:33.020 [2024-05-15 02:01:56.714169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.714306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.714334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.020 qpair failed and we were unable to recover it. 00:33:33.020 [2024-05-15 02:01:56.714479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.714567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.714598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.020 qpair failed and we were unable to recover it. 
00:33:33.020 [2024-05-15 02:01:56.714714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.714836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.714877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.020 qpair failed and we were unable to recover it. 00:33:33.020 [2024-05-15 02:01:56.714999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.715157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.715184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.020 qpair failed and we were unable to recover it. 00:33:33.020 [2024-05-15 02:01:56.715344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.715435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.715461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.020 qpair failed and we were unable to recover it. 00:33:33.020 [2024-05-15 02:01:56.715637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.715732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.715757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.020 qpair failed and we were unable to recover it. 00:33:33.020 [2024-05-15 02:01:56.715871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.716013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.716038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.020 qpair failed and we were unable to recover it. 00:33:33.020 [2024-05-15 02:01:56.716157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.716275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.716300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.020 qpair failed and we were unable to recover it. 00:33:33.020 [2024-05-15 02:01:56.716418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.716525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.716552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.020 qpair failed and we were unable to recover it. 
00:33:33.020 [2024-05-15 02:01:56.716696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.716840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.716865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.020 qpair failed and we were unable to recover it. 00:33:33.020 [2024-05-15 02:01:56.716988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.717077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.717102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.020 qpair failed and we were unable to recover it. 00:33:33.020 [2024-05-15 02:01:56.717203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.717328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.717375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.020 qpair failed and we were unable to recover it. 00:33:33.020 [2024-05-15 02:01:56.717529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.717627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.717653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.020 qpair failed and we were unable to recover it. 00:33:33.020 [2024-05-15 02:01:56.717773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.717867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.717893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.020 qpair failed and we were unable to recover it. 00:33:33.020 [2024-05-15 02:01:56.717991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.718087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.020 [2024-05-15 02:01:56.718111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.020 qpair failed and we were unable to recover it. 00:33:33.021 [2024-05-15 02:01:56.718267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.718409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.718433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.021 qpair failed and we were unable to recover it. 
00:33:33.021 [2024-05-15 02:01:56.718541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.718652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.718677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.021 qpair failed and we were unable to recover it. 00:33:33.021 [2024-05-15 02:01:56.718812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.718905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.718932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.021 qpair failed and we were unable to recover it. 00:33:33.021 [2024-05-15 02:01:56.719100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.719202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.719241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.021 qpair failed and we were unable to recover it. 00:33:33.021 [2024-05-15 02:01:56.719357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.719477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.719502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.021 qpair failed and we were unable to recover it. 00:33:33.021 [2024-05-15 02:01:56.719607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.719708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.719733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.021 qpair failed and we were unable to recover it. 00:33:33.021 [2024-05-15 02:01:56.719825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.719939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.719964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.021 qpair failed and we were unable to recover it. 00:33:33.021 [2024-05-15 02:01:56.720066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.720188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.720213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.021 qpair failed and we were unable to recover it. 
00:33:33.021 [2024-05-15 02:01:56.720342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.720469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.720494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.021 qpair failed and we were unable to recover it. 00:33:33.021 [2024-05-15 02:01:56.720633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.720787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.720815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.021 qpair failed and we were unable to recover it. 00:33:33.021 [2024-05-15 02:01:56.720952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.721048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.721073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.021 qpair failed and we were unable to recover it. 00:33:33.021 [2024-05-15 02:01:56.721214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.721373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.721400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.021 qpair failed and we were unable to recover it. 00:33:33.021 [2024-05-15 02:01:56.721527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.721655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.721683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.021 qpair failed and we were unable to recover it. 00:33:33.021 [2024-05-15 02:01:56.721827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.721951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.721976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.021 qpair failed and we were unable to recover it. 00:33:33.021 [2024-05-15 02:01:56.722069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.722189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.722220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.021 qpair failed and we were unable to recover it. 
00:33:33.021 [2024-05-15 02:01:56.722326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.722447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.722472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.021 qpair failed and we were unable to recover it. 00:33:33.021 [2024-05-15 02:01:56.722563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.722654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.722679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.021 qpair failed and we were unable to recover it. 00:33:33.021 [2024-05-15 02:01:56.722806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.722900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.722925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.021 qpair failed and we were unable to recover it. 00:33:33.021 [2024-05-15 02:01:56.723037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.723168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.723196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.021 qpair failed and we were unable to recover it. 00:33:33.021 [2024-05-15 02:01:56.723374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.723471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.723495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.021 qpair failed and we were unable to recover it. 00:33:33.021 [2024-05-15 02:01:56.723584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.723718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.723743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.021 qpair failed and we were unable to recover it. 00:33:33.021 [2024-05-15 02:01:56.723855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.723974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.723998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.021 qpair failed and we were unable to recover it. 
00:33:33.021 [2024-05-15 02:01:56.724089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.724185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.724210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.021 qpair failed and we were unable to recover it. 00:33:33.021 [2024-05-15 02:01:56.724355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.724526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.724554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.021 qpair failed and we were unable to recover it. 00:33:33.021 [2024-05-15 02:01:56.724714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.724854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.724879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.021 qpair failed and we were unable to recover it. 00:33:33.021 [2024-05-15 02:01:56.724971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.725092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.725117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.021 qpair failed and we were unable to recover it. 00:33:33.021 [2024-05-15 02:01:56.725271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.725403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.725430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.021 qpair failed and we were unable to recover it. 00:33:33.021 [2024-05-15 02:01:56.725539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.725674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.021 [2024-05-15 02:01:56.725701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.021 qpair failed and we were unable to recover it. 00:33:33.021 [2024-05-15 02:01:56.725841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.022 [2024-05-15 02:01:56.725965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.022 [2024-05-15 02:01:56.725989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.022 qpair failed and we were unable to recover it. 
00:33:33.022 [2024-05-15 02:01:56.726106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.022 [2024-05-15 02:01:56.726223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.022 [2024-05-15 02:01:56.726250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.022 qpair failed and we were unable to recover it. 00:33:33.022 [2024-05-15 02:01:56.726368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.022 [2024-05-15 02:01:56.726514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.022 [2024-05-15 02:01:56.726539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.022 qpair failed and we were unable to recover it. 00:33:33.022 [2024-05-15 02:01:56.726685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.022 [2024-05-15 02:01:56.726806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.022 [2024-05-15 02:01:56.726831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.022 qpair failed and we were unable to recover it. 00:33:33.022 [2024-05-15 02:01:56.727008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.022 [2024-05-15 02:01:56.727130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.022 [2024-05-15 02:01:56.727155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.022 qpair failed and we were unable to recover it. 00:33:33.022 [2024-05-15 02:01:56.727257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.022 [2024-05-15 02:01:56.727357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.022 [2024-05-15 02:01:56.727382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.022 qpair failed and we were unable to recover it. 00:33:33.022 [2024-05-15 02:01:56.727497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.022 [2024-05-15 02:01:56.727589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.022 [2024-05-15 02:01:56.727614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.022 qpair failed and we were unable to recover it. 00:33:33.022 [2024-05-15 02:01:56.727763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.022 [2024-05-15 02:01:56.727882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.022 [2024-05-15 02:01:56.727911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.022 qpair failed and we were unable to recover it. 
00:33:33.022 [2024-05-15 02:01:56.728004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.022 [2024-05-15 02:01:56.728138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.022 [2024-05-15 02:01:56.728178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.022 qpair failed and we were unable to recover it. 00:33:33.022 [2024-05-15 02:01:56.728296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.022 [2024-05-15 02:01:56.728394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.022 [2024-05-15 02:01:56.728422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.022 qpair failed and we were unable to recover it. 00:33:33.022 [2024-05-15 02:01:56.728569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.022 [2024-05-15 02:01:56.728693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.022 [2024-05-15 02:01:56.728720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.022 qpair failed and we were unable to recover it. 00:33:33.022 [2024-05-15 02:01:56.728850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.022 [2024-05-15 02:01:56.729071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.022 [2024-05-15 02:01:56.729099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.022 qpair failed and we were unable to recover it. 00:33:33.022 [2024-05-15 02:01:56.729231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.022 [2024-05-15 02:01:56.729354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.022 [2024-05-15 02:01:56.729379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.022 qpair failed and we were unable to recover it. 00:33:33.022 [2024-05-15 02:01:56.729482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.022 [2024-05-15 02:01:56.729606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.022 [2024-05-15 02:01:56.729631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.022 qpair failed and we were unable to recover it. 00:33:33.022 [2024-05-15 02:01:56.729725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.022 [2024-05-15 02:01:56.729844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.022 [2024-05-15 02:01:56.729872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.022 qpair failed and we were unable to recover it. 
00:33:33.022 [2024-05-15 02:01:56.730009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:33:33.022 [2024-05-15 02:01:56.730126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:33:33.022 [2024-05-15 02:01:56.730151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 
00:33:33.022 qpair failed and we were unable to recover it. 
[... the same three-record failure repeats for every reconnect attempt from 02:01:56.730261 through 02:01:56.772918 (elapsed 00:33:33.022 through 00:33:33.027): two posix.c:1037:posix_sock_create connect() failed, errno = 111 errors, then nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." ...]
00:33:33.027 [2024-05-15 02:01:56.773044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.027 [2024-05-15 02:01:56.773156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.027 [2024-05-15 02:01:56.773182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.027 qpair failed and we were unable to recover it. 00:33:33.027 [2024-05-15 02:01:56.773286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.027 [2024-05-15 02:01:56.773385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.027 [2024-05-15 02:01:56.773412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.027 qpair failed and we were unable to recover it. 00:33:33.027 [2024-05-15 02:01:56.773547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.027 [2024-05-15 02:01:56.773669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.027 [2024-05-15 02:01:56.773695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.027 qpair failed and we were unable to recover it. 00:33:33.027 [2024-05-15 02:01:56.773842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.027 [2024-05-15 02:01:56.773966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.027 [2024-05-15 02:01:56.773999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.027 qpair failed and we were unable to recover it. 00:33:33.027 [2024-05-15 02:01:56.774147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.027 [2024-05-15 02:01:56.774252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.774279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.028 qpair failed and we were unable to recover it. 00:33:33.028 [2024-05-15 02:01:56.774427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.774626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.774652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.028 qpair failed and we were unable to recover it. 00:33:33.028 [2024-05-15 02:01:56.774799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.774972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.774998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.028 qpair failed and we were unable to recover it. 
00:33:33.028 [2024-05-15 02:01:56.775091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.775193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.775225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.028 qpair failed and we were unable to recover it. 00:33:33.028 [2024-05-15 02:01:56.775361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.775462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.775489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.028 qpair failed and we were unable to recover it. 00:33:33.028 [2024-05-15 02:01:56.775609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.775766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.775795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.028 qpair failed and we were unable to recover it. 00:33:33.028 [2024-05-15 02:01:56.775935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.776058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.776084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.028 qpair failed and we were unable to recover it. 00:33:33.028 [2024-05-15 02:01:56.776206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.776313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.776340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.028 qpair failed and we were unable to recover it. 00:33:33.028 [2024-05-15 02:01:56.776513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.776633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.776660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.028 qpair failed and we were unable to recover it. 00:33:33.028 [2024-05-15 02:01:56.776759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.776881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.776913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.028 qpair failed and we were unable to recover it. 
00:33:33.028 [2024-05-15 02:01:56.777042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.777167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.777193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.028 qpair failed and we were unable to recover it. 00:33:33.028 [2024-05-15 02:01:56.777320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.777490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.777519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.028 qpair failed and we were unable to recover it. 00:33:33.028 [2024-05-15 02:01:56.777648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.777757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.777787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.028 qpair failed and we were unable to recover it. 00:33:33.028 [2024-05-15 02:01:56.777958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.778102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.778145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.028 qpair failed and we were unable to recover it. 00:33:33.028 [2024-05-15 02:01:56.778270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.778474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.778500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.028 qpair failed and we were unable to recover it. 00:33:33.028 [2024-05-15 02:01:56.778613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.778711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.778738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.028 qpair failed and we were unable to recover it. 00:33:33.028 [2024-05-15 02:01:56.778887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.779059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.779088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.028 qpair failed and we were unable to recover it. 
00:33:33.028 [2024-05-15 02:01:56.779196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.779328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.779357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.028 qpair failed and we were unable to recover it. 00:33:33.028 [2024-05-15 02:01:56.779469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.779600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.779629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.028 qpair failed and we were unable to recover it. 00:33:33.028 [2024-05-15 02:01:56.779777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.779889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.779916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.028 qpair failed and we were unable to recover it. 00:33:33.028 [2024-05-15 02:01:56.780057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.780236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.780262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.028 qpair failed and we were unable to recover it. 00:33:33.028 [2024-05-15 02:01:56.780409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.780598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.780624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.028 qpair failed and we were unable to recover it. 00:33:33.028 [2024-05-15 02:01:56.780825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.780954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.780983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.028 qpair failed and we were unable to recover it. 00:33:33.028 [2024-05-15 02:01:56.781123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.781262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.781292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.028 qpair failed and we were unable to recover it. 
00:33:33.028 [2024-05-15 02:01:56.781462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.781621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.781650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.028 qpair failed and we were unable to recover it. 00:33:33.028 [2024-05-15 02:01:56.781785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.781880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.781908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.028 qpair failed and we were unable to recover it. 00:33:33.028 [2024-05-15 02:01:56.782052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.782189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.782241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.028 qpair failed and we were unable to recover it. 00:33:33.028 [2024-05-15 02:01:56.782366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.782490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.782516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.028 qpair failed and we were unable to recover it. 00:33:33.028 [2024-05-15 02:01:56.782643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.782760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.782786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.028 qpair failed and we were unable to recover it. 00:33:33.028 [2024-05-15 02:01:56.782909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.783033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.028 [2024-05-15 02:01:56.783059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.028 qpair failed and we were unable to recover it. 00:33:33.028 [2024-05-15 02:01:56.783164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.783310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.783337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.029 qpair failed and we were unable to recover it. 
00:33:33.029 [2024-05-15 02:01:56.783460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.783560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.783588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.029 qpair failed and we were unable to recover it. 00:33:33.029 [2024-05-15 02:01:56.783731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.783842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.783872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.029 qpair failed and we were unable to recover it. 00:33:33.029 [2024-05-15 02:01:56.784030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.784162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.784191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.029 qpair failed and we were unable to recover it. 00:33:33.029 [2024-05-15 02:01:56.784352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.784478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.784504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.029 qpair failed and we were unable to recover it. 00:33:33.029 [2024-05-15 02:01:56.784620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.784727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.784757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.029 qpair failed and we were unable to recover it. 00:33:33.029 [2024-05-15 02:01:56.784896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.785070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.785097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.029 qpair failed and we were unable to recover it. 00:33:33.029 [2024-05-15 02:01:56.785246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.785371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.785414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.029 qpair failed and we were unable to recover it. 
00:33:33.029 [2024-05-15 02:01:56.785511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.785619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.785645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.029 qpair failed and we were unable to recover it. 00:33:33.029 [2024-05-15 02:01:56.785739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.785831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.785857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.029 qpair failed and we were unable to recover it. 00:33:33.029 [2024-05-15 02:01:56.785978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.786118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.786144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.029 qpair failed and we were unable to recover it. 00:33:33.029 [2024-05-15 02:01:56.786330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.786447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.786473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.029 qpair failed and we were unable to recover it. 00:33:33.029 [2024-05-15 02:01:56.786588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.786708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.786737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.029 qpair failed and we were unable to recover it. 00:33:33.029 [2024-05-15 02:01:56.786881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.786977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.787010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.029 qpair failed and we were unable to recover it. 00:33:33.029 [2024-05-15 02:01:56.787155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.787271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.787299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.029 qpair failed and we were unable to recover it. 
00:33:33.029 [2024-05-15 02:01:56.787426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.787550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.787577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.029 qpair failed and we were unable to recover it. 00:33:33.029 [2024-05-15 02:01:56.787731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.787852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.787894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.029 qpair failed and we were unable to recover it. 00:33:33.029 [2024-05-15 02:01:56.788026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.788145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.788172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.029 qpair failed and we were unable to recover it. 00:33:33.029 [2024-05-15 02:01:56.788346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.788470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.788512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.029 qpair failed and we were unable to recover it. 00:33:33.029 [2024-05-15 02:01:56.788660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.788784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.788811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.029 qpair failed and we were unable to recover it. 00:33:33.029 [2024-05-15 02:01:56.788961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.789057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.789084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.029 qpair failed and we were unable to recover it. 00:33:33.029 [2024-05-15 02:01:56.789202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.789304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.789331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.029 qpair failed and we were unable to recover it. 
00:33:33.029 [2024-05-15 02:01:56.789459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.789556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.789582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.029 qpair failed and we were unable to recover it. 00:33:33.029 [2024-05-15 02:01:56.789741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.789853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.789882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.029 qpair failed and we were unable to recover it. 00:33:33.029 [2024-05-15 02:01:56.790043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.790204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.790242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.029 qpair failed and we were unable to recover it. 00:33:33.029 [2024-05-15 02:01:56.790386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.790511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.029 [2024-05-15 02:01:56.790537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.030 qpair failed and we were unable to recover it. 00:33:33.030 [2024-05-15 02:01:56.790681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.790838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.790867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.030 qpair failed and we were unable to recover it. 00:33:33.030 [2024-05-15 02:01:56.791002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.791138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.791166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.030 qpair failed and we were unable to recover it. 00:33:33.030 [2024-05-15 02:01:56.791319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.791416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.791442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.030 qpair failed and we were unable to recover it. 
00:33:33.030 [2024-05-15 02:01:56.791539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.791683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.791712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.030 qpair failed and we were unable to recover it. 00:33:33.030 [2024-05-15 02:01:56.791852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.791977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.792007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.030 qpair failed and we were unable to recover it. 00:33:33.030 [2024-05-15 02:01:56.792131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.792252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.792280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.030 qpair failed and we were unable to recover it. 00:33:33.030 [2024-05-15 02:01:56.792484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.792655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.792681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.030 qpair failed and we were unable to recover it. 00:33:33.030 [2024-05-15 02:01:56.792773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.792917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.792943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.030 qpair failed and we were unable to recover it. 00:33:33.030 [2024-05-15 02:01:56.793095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.793226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.793270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.030 qpair failed and we were unable to recover it. 00:33:33.030 [2024-05-15 02:01:56.793432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.793565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.793596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.030 qpair failed and we were unable to recover it. 
00:33:33.030 [2024-05-15 02:01:56.793756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.793867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.793896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.030 qpair failed and we were unable to recover it. 00:33:33.030 [2024-05-15 02:01:56.794065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.794263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.794290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.030 qpair failed and we were unable to recover it. 00:33:33.030 [2024-05-15 02:01:56.794414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.794552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.794582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.030 qpair failed and we were unable to recover it. 00:33:33.030 [2024-05-15 02:01:56.794717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.794852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.794881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.030 qpair failed and we were unable to recover it. 00:33:33.030 [2024-05-15 02:01:56.794993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.795094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.795121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.030 qpair failed and we were unable to recover it. 00:33:33.030 [2024-05-15 02:01:56.795261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.795394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.795423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.030 qpair failed and we were unable to recover it. 00:33:33.030 [2024-05-15 02:01:56.795549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.795682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.795712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.030 qpair failed and we were unable to recover it. 
00:33:33.030 [2024-05-15 02:01:56.795856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.795994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.796021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.030 qpair failed and we were unable to recover it. 00:33:33.030 [2024-05-15 02:01:56.796176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.796405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.796435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.030 qpair failed and we were unable to recover it. 00:33:33.030 [2024-05-15 02:01:56.796567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.796670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.796700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.030 qpair failed and we were unable to recover it. 00:33:33.030 [2024-05-15 02:01:56.796847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.796965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.796992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.030 qpair failed and we were unable to recover it. 00:33:33.030 [2024-05-15 02:01:56.797133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.797272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.797301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.030 qpair failed and we were unable to recover it. 00:33:33.030 [2024-05-15 02:01:56.797437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.797592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.797621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.030 qpair failed and we were unable to recover it. 00:33:33.030 [2024-05-15 02:01:56.797825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.797999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.798028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.030 qpair failed and we were unable to recover it. 
00:33:33.030 [2024-05-15 02:01:56.798187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.798324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.798355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.030 qpair failed and we were unable to recover it. 00:33:33.030 [2024-05-15 02:01:56.798518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.798620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.798651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.030 qpair failed and we were unable to recover it. 00:33:33.030 [2024-05-15 02:01:56.798819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.798971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.799014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.030 qpair failed and we were unable to recover it. 00:33:33.030 [2024-05-15 02:01:56.799175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.799287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.799317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.030 qpair failed and we were unable to recover it. 00:33:33.030 [2024-05-15 02:01:56.799451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.799580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.030 [2024-05-15 02:01:56.799609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.030 qpair failed and we were unable to recover it. 00:33:33.030 [2024-05-15 02:01:56.799777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.031 [2024-05-15 02:01:56.799897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.031 [2024-05-15 02:01:56.799923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.031 qpair failed and we were unable to recover it. 00:33:33.031 [2024-05-15 02:01:56.800087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.031 [2024-05-15 02:01:56.800211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.031 [2024-05-15 02:01:56.800244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.031 qpair failed and we were unable to recover it. 
00:33:33.031 [2024-05-15 02:01:56.800366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.031 [2024-05-15 02:01:56.800488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.031 [2024-05-15 02:01:56.800514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.031 qpair failed and we were unable to recover it. 00:33:33.031 [2024-05-15 02:01:56.800609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.031 [2024-05-15 02:01:56.800721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.031 [2024-05-15 02:01:56.800747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.031 qpair failed and we were unable to recover it. 00:33:33.031 [2024-05-15 02:01:56.800872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.031 [2024-05-15 02:01:56.801007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.031 [2024-05-15 02:01:56.801035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.031 qpair failed and we were unable to recover it. 00:33:33.031 [2024-05-15 02:01:56.801200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.031 [2024-05-15 02:01:56.801340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.031 [2024-05-15 02:01:56.801370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.031 qpair failed and we were unable to recover it. 00:33:33.031 [2024-05-15 02:01:56.801541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.031 [2024-05-15 02:01:56.801659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.031 [2024-05-15 02:01:56.801686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.031 qpair failed and we were unable to recover it. 00:33:33.031 [2024-05-15 02:01:56.801812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.031 [2024-05-15 02:01:56.801931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.031 [2024-05-15 02:01:56.801958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.031 qpair failed and we were unable to recover it. 00:33:33.031 [2024-05-15 02:01:56.802126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.031 [2024-05-15 02:01:56.802264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.031 [2024-05-15 02:01:56.802294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.031 qpair failed and we were unable to recover it. 
00:33:33.031 [2024-05-15 02:01:56.802433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.031 [2024-05-15 02:01:56.802554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.031 [2024-05-15 02:01:56.802580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420
00:33:33.031 qpair failed and we were unable to recover it.
[... the same sequence (two posix_sock_create connect() failures, the nvme_tcp_qpair_connect_sock error, then "qpair failed and we were unable to recover it.") repeats with fresh timestamps for every reconnection attempt from 02:01:56.802665 through 02:01:56.848264 (Jenkins time 00:33:33.031-00:33:33.036); each attempt targets tqpair=0x1d7c570 at 10.0.0.2:4420 and fails with errno = 111 ...]
00:33:33.036 [2024-05-15 02:01:56.848368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.036 [2024-05-15 02:01:56.848457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.036 [2024-05-15 02:01:56.848483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.036 qpair failed and we were unable to recover it. 00:33:33.036 [2024-05-15 02:01:56.848605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.036 [2024-05-15 02:01:56.848723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.036 [2024-05-15 02:01:56.848753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.036 qpair failed and we were unable to recover it. 00:33:33.036 [2024-05-15 02:01:56.848887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.036 [2024-05-15 02:01:56.849032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.036 [2024-05-15 02:01:56.849058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.036 qpair failed and we were unable to recover it. 00:33:33.036 [2024-05-15 02:01:56.849174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.036 [2024-05-15 02:01:56.849377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.036 [2024-05-15 02:01:56.849405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.036 qpair failed and we were unable to recover it. 00:33:33.036 [2024-05-15 02:01:56.849553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.036 [2024-05-15 02:01:56.849724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.036 [2024-05-15 02:01:56.849754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.036 qpair failed and we were unable to recover it. 00:33:33.036 [2024-05-15 02:01:56.849919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.036 [2024-05-15 02:01:56.850049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.036 [2024-05-15 02:01:56.850075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.036 qpair failed and we were unable to recover it. 00:33:33.036 [2024-05-15 02:01:56.850256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.036 [2024-05-15 02:01:56.850404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.036 [2024-05-15 02:01:56.850430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.036 qpair failed and we were unable to recover it. 
00:33:33.036 [2024-05-15 02:01:56.850531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.036 [2024-05-15 02:01:56.850672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.036 [2024-05-15 02:01:56.850704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.036 qpair failed and we were unable to recover it. 00:33:33.036 [2024-05-15 02:01:56.850827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.036 [2024-05-15 02:01:56.850948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.036 [2024-05-15 02:01:56.850974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.036 qpair failed and we were unable to recover it. 00:33:33.036 [2024-05-15 02:01:56.851096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.036 [2024-05-15 02:01:56.851274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.036 [2024-05-15 02:01:56.851301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.036 qpair failed and we were unable to recover it. 00:33:33.036 [2024-05-15 02:01:56.851425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.851568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.851595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.037 qpair failed and we were unable to recover it. 00:33:33.037 [2024-05-15 02:01:56.851718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.851839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.851865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.037 qpair failed and we were unable to recover it. 00:33:33.037 [2024-05-15 02:01:56.851971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.852099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.852128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.037 qpair failed and we were unable to recover it. 00:33:33.037 [2024-05-15 02:01:56.852266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.852394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.852424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.037 qpair failed and we were unable to recover it. 
00:33:33.037 [2024-05-15 02:01:56.852563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.852651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.852677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.037 qpair failed and we were unable to recover it. 00:33:33.037 [2024-05-15 02:01:56.852787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.852920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.852950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.037 qpair failed and we were unable to recover it. 00:33:33.037 [2024-05-15 02:01:56.853063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.853221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.853248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.037 qpair failed and we were unable to recover it. 00:33:33.037 [2024-05-15 02:01:56.853371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.853493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.853519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.037 qpair failed and we were unable to recover it. 00:33:33.037 [2024-05-15 02:01:56.853659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.853823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.853849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.037 qpair failed and we were unable to recover it. 00:33:33.037 [2024-05-15 02:01:56.853983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.854129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.854155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.037 qpair failed and we were unable to recover it. 00:33:33.037 [2024-05-15 02:01:56.854288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.854384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.854411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.037 qpair failed and we were unable to recover it. 
00:33:33.037 [2024-05-15 02:01:56.854577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.854707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.854736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.037 qpair failed and we were unable to recover it. 00:33:33.037 [2024-05-15 02:01:56.854845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.854972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.855001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.037 qpair failed and we were unable to recover it. 00:33:33.037 [2024-05-15 02:01:56.855147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.855263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.855291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.037 qpair failed and we were unable to recover it. 00:33:33.037 [2024-05-15 02:01:56.855411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.855550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.855576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.037 qpair failed and we were unable to recover it. 00:33:33.037 [2024-05-15 02:01:56.855722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.855861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.855891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.037 qpair failed and we were unable to recover it. 00:33:33.037 [2024-05-15 02:01:56.856059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.856187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.856244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.037 qpair failed and we were unable to recover it. 00:33:33.037 [2024-05-15 02:01:56.856423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.856579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.856605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.037 qpair failed and we were unable to recover it. 
00:33:33.037 [2024-05-15 02:01:56.856699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.856798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.856825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.037 qpair failed and we were unable to recover it. 00:33:33.037 [2024-05-15 02:01:56.856949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.857042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.857069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.037 qpair failed and we were unable to recover it. 00:33:33.037 [2024-05-15 02:01:56.857238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.857365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.857394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.037 qpair failed and we were unable to recover it. 00:33:33.037 [2024-05-15 02:01:56.857522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.857647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.857677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.037 qpair failed and we were unable to recover it. 00:33:33.037 [2024-05-15 02:01:56.857823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.857978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.858005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.037 qpair failed and we were unable to recover it. 00:33:33.037 [2024-05-15 02:01:56.858178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.858294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.858324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.037 qpair failed and we were unable to recover it. 00:33:33.037 [2024-05-15 02:01:56.858486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.858585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.858614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.037 qpair failed and we were unable to recover it. 
00:33:33.037 [2024-05-15 02:01:56.858780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.858945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.858974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.037 qpair failed and we were unable to recover it. 00:33:33.037 [2024-05-15 02:01:56.859107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.859210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.859247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.037 qpair failed and we were unable to recover it. 00:33:33.037 [2024-05-15 02:01:56.859403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.859493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.859519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.037 qpair failed and we were unable to recover it. 00:33:33.037 [2024-05-15 02:01:56.859651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.859777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.859803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.037 qpair failed and we were unable to recover it. 00:33:33.037 [2024-05-15 02:01:56.859910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.860036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.037 [2024-05-15 02:01:56.860062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.037 qpair failed and we were unable to recover it. 00:33:33.038 [2024-05-15 02:01:56.860186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.860343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.860373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.038 qpair failed and we were unable to recover it. 00:33:33.038 [2024-05-15 02:01:56.860500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.860643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.860669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.038 qpair failed and we were unable to recover it. 
00:33:33.038 [2024-05-15 02:01:56.860806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.860940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.860969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.038 qpair failed and we were unable to recover it. 00:33:33.038 [2024-05-15 02:01:56.861145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.861255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.861283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.038 qpair failed and we were unable to recover it. 00:33:33.038 [2024-05-15 02:01:56.861399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.861493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.861520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.038 qpair failed and we were unable to recover it. 00:33:33.038 [2024-05-15 02:01:56.861640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.861794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.861824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.038 qpair failed and we were unable to recover it. 00:33:33.038 [2024-05-15 02:01:56.861934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.862036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.862065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.038 qpair failed and we were unable to recover it. 00:33:33.038 [2024-05-15 02:01:56.862211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.862365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.862391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.038 qpair failed and we were unable to recover it. 00:33:33.038 [2024-05-15 02:01:56.862536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.862694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.862723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.038 qpair failed and we were unable to recover it. 
00:33:33.038 [2024-05-15 02:01:56.862858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.862991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.863020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.038 qpair failed and we were unable to recover it. 00:33:33.038 [2024-05-15 02:01:56.863165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.863287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.863315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.038 qpair failed and we were unable to recover it. 00:33:33.038 [2024-05-15 02:01:56.863411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.863507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.863533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.038 qpair failed and we were unable to recover it. 00:33:33.038 [2024-05-15 02:01:56.863653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.863739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.863766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.038 qpair failed and we were unable to recover it. 00:33:33.038 [2024-05-15 02:01:56.863914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.864032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.864059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.038 qpair failed and we were unable to recover it. 00:33:33.038 [2024-05-15 02:01:56.864211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.864356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.864386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.038 qpair failed and we were unable to recover it. 00:33:33.038 [2024-05-15 02:01:56.864525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.864678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.864708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.038 qpair failed and we were unable to recover it. 
00:33:33.038 [2024-05-15 02:01:56.864853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.864952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.864979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.038 qpair failed and we were unable to recover it. 00:33:33.038 [2024-05-15 02:01:56.865152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.865316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.865346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.038 qpair failed and we were unable to recover it. 00:33:33.038 [2024-05-15 02:01:56.865455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.865586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.865616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.038 qpair failed and we were unable to recover it. 00:33:33.038 [2024-05-15 02:01:56.865759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.865882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.865908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.038 qpair failed and we were unable to recover it. 00:33:33.038 [2024-05-15 02:01:56.866026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.866127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.866156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.038 qpair failed and we were unable to recover it. 00:33:33.038 [2024-05-15 02:01:56.866315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.866408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.866435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.038 qpair failed and we were unable to recover it. 00:33:33.038 [2024-05-15 02:01:56.866562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.866645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.866671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.038 qpair failed and we were unable to recover it. 
00:33:33.038 [2024-05-15 02:01:56.866765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.866882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.866909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.038 qpair failed and we were unable to recover it. 00:33:33.038 [2024-05-15 02:01:56.867034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.867141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.038 [2024-05-15 02:01:56.867171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.038 qpair failed and we were unable to recover it. 00:33:33.038 [2024-05-15 02:01:56.867327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.867449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.867477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.039 qpair failed and we were unable to recover it. 00:33:33.039 [2024-05-15 02:01:56.867653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.867781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.867810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.039 qpair failed and we were unable to recover it. 00:33:33.039 [2024-05-15 02:01:56.867969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.868133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.868163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.039 qpair failed and we were unable to recover it. 00:33:33.039 [2024-05-15 02:01:56.868325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.868426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.868454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.039 qpair failed and we were unable to recover it. 00:33:33.039 [2024-05-15 02:01:56.868591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.868731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.868758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.039 qpair failed and we were unable to recover it. 
00:33:33.039 [2024-05-15 02:01:56.868906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.869027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.869054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.039 qpair failed and we were unable to recover it. 00:33:33.039 [2024-05-15 02:01:56.869174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.869330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.869372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.039 qpair failed and we were unable to recover it. 00:33:33.039 [2024-05-15 02:01:56.869553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.869669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.869695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.039 qpair failed and we were unable to recover it. 00:33:33.039 [2024-05-15 02:01:56.869816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.869928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.869955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.039 qpair failed and we were unable to recover it. 00:33:33.039 [2024-05-15 02:01:56.870068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.870214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.870248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.039 qpair failed and we were unable to recover it. 00:33:33.039 [2024-05-15 02:01:56.870390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.870523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.870554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.039 qpair failed and we were unable to recover it. 00:33:33.039 [2024-05-15 02:01:56.870718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.870822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.870852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.039 qpair failed and we were unable to recover it. 
00:33:33.039 [2024-05-15 02:01:56.870961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.871052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.871082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.039 qpair failed and we were unable to recover it. 00:33:33.039 [2024-05-15 02:01:56.871187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.871356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.871386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.039 qpair failed and we were unable to recover it. 00:33:33.039 [2024-05-15 02:01:56.871547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.871670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.871699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.039 qpair failed and we were unable to recover it. 00:33:33.039 [2024-05-15 02:01:56.871842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.871926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.871952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.039 qpair failed and we were unable to recover it. 00:33:33.039 [2024-05-15 02:01:56.872051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.872151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.872178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.039 qpair failed and we were unable to recover it. 00:33:33.039 [2024-05-15 02:01:56.872326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.872485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.872514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.039 qpair failed and we were unable to recover it. 00:33:33.039 [2024-05-15 02:01:56.872686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.872828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.872869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.039 qpair failed and we were unable to recover it. 
00:33:33.039 [2024-05-15 02:01:56.873006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.873164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.873193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.039 qpair failed and we were unable to recover it. 00:33:33.039 [2024-05-15 02:01:56.873337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.873462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.873491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.039 qpair failed and we were unable to recover it. 00:33:33.039 [2024-05-15 02:01:56.873603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.873730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.873755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.039 qpair failed and we were unable to recover it. 00:33:33.039 [2024-05-15 02:01:56.873900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.874029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.874058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.039 qpair failed and we were unable to recover it. 00:33:33.039 [2024-05-15 02:01:56.874195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.874309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.874353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.039 qpair failed and we were unable to recover it. 00:33:33.039 [2024-05-15 02:01:56.874475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.874561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.874588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.039 qpair failed and we were unable to recover it. 00:33:33.039 [2024-05-15 02:01:56.874743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.874857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.874886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.039 qpair failed and we were unable to recover it. 
00:33:33.039 [2024-05-15 02:01:56.875056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.875151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.875178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.039 qpair failed and we were unable to recover it. 00:33:33.039 [2024-05-15 02:01:56.875310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.875396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.875422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.039 qpair failed and we were unable to recover it. 00:33:33.039 [2024-05-15 02:01:56.875542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.875690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.039 [2024-05-15 02:01:56.875716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.039 qpair failed and we were unable to recover it. 00:33:33.040 [2024-05-15 02:01:56.875831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.875938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.875968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.040 qpair failed and we were unable to recover it. 00:33:33.040 [2024-05-15 02:01:56.876086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.876211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.876244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.040 qpair failed and we were unable to recover it. 00:33:33.040 [2024-05-15 02:01:56.876366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.876485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.876514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.040 qpair failed and we were unable to recover it. 00:33:33.040 [2024-05-15 02:01:56.876648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.876783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.876812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.040 qpair failed and we were unable to recover it. 
00:33:33.040 [2024-05-15 02:01:56.876960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.877105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.877131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.040 qpair failed and we were unable to recover it. 00:33:33.040 [2024-05-15 02:01:56.877281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.877395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.877421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.040 qpair failed and we were unable to recover it. 00:33:33.040 [2024-05-15 02:01:56.877583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.877717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.877747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.040 qpair failed and we were unable to recover it. 00:33:33.040 [2024-05-15 02:01:56.877877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.877998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.878025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.040 qpair failed and we were unable to recover it. 00:33:33.040 [2024-05-15 02:01:56.878117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.878234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.878261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.040 qpair failed and we were unable to recover it. 00:33:33.040 [2024-05-15 02:01:56.878407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.878536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.878565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.040 qpair failed and we were unable to recover it. 00:33:33.040 [2024-05-15 02:01:56.878735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.878837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.878865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.040 qpair failed and we were unable to recover it. 
00:33:33.040 [2024-05-15 02:01:56.878985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.879101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.879128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.040 qpair failed and we were unable to recover it. 00:33:33.040 [2024-05-15 02:01:56.879231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.879417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.879447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.040 qpair failed and we were unable to recover it. 00:33:33.040 [2024-05-15 02:01:56.879616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.879739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.879765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.040 qpair failed and we were unable to recover it. 00:33:33.040 [2024-05-15 02:01:56.879858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.879946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.879972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.040 qpair failed and we were unable to recover it. 00:33:33.040 [2024-05-15 02:01:56.880098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.880263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.880293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.040 qpair failed and we were unable to recover it. 00:33:33.040 [2024-05-15 02:01:56.880460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.880589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.880616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.040 qpair failed and we were unable to recover it. 00:33:33.040 [2024-05-15 02:01:56.880738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.880823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.880849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.040 qpair failed and we were unable to recover it. 
00:33:33.040 [2024-05-15 02:01:56.880938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.881052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.881095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.040 qpair failed and we were unable to recover it. 00:33:33.040 [2024-05-15 02:01:56.881252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.881378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.881404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.040 qpair failed and we were unable to recover it. 00:33:33.040 [2024-05-15 02:01:56.881550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.881657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.881687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.040 qpair failed and we were unable to recover it. 00:33:33.040 [2024-05-15 02:01:56.881812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.881973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.882001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.040 qpair failed and we were unable to recover it. 00:33:33.040 [2024-05-15 02:01:56.882128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.882247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.882273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.040 qpair failed and we were unable to recover it. 00:33:33.040 [2024-05-15 02:01:56.882369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.882506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.882535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.040 qpair failed and we were unable to recover it. 00:33:33.040 [2024-05-15 02:01:56.882662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.882803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.882833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.040 qpair failed and we were unable to recover it. 
00:33:33.040 [2024-05-15 02:01:56.882992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.883083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.883109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.040 qpair failed and we were unable to recover it. 00:33:33.040 [2024-05-15 02:01:56.883252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.883357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.883386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.040 qpair failed and we were unable to recover it. 00:33:33.040 [2024-05-15 02:01:56.883518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.883654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.883683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.040 qpair failed and we were unable to recover it. 00:33:33.040 [2024-05-15 02:01:56.883785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.883905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.883931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.040 qpair failed and we were unable to recover it. 00:33:33.040 [2024-05-15 02:01:56.884016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.040 [2024-05-15 02:01:56.884133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.884159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.041 qpair failed and we were unable to recover it. 00:33:33.041 [2024-05-15 02:01:56.884331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.884484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.884510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.041 qpair failed and we were unable to recover it. 00:33:33.041 [2024-05-15 02:01:56.884656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.884780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.884806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.041 qpair failed and we were unable to recover it. 
00:33:33.041 [2024-05-15 02:01:56.884924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.885072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.885099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.041 qpair failed and we were unable to recover it. 00:33:33.041 [2024-05-15 02:01:56.885243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.885390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.885417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.041 qpair failed and we were unable to recover it. 00:33:33.041 [2024-05-15 02:01:56.885504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.885617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.885647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.041 qpair failed and we were unable to recover it. 00:33:33.041 [2024-05-15 02:01:56.885771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.885911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.885940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.041 qpair failed and we were unable to recover it. 00:33:33.041 [2024-05-15 02:01:56.886085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.886201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.886235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.041 qpair failed and we were unable to recover it. 00:33:33.041 [2024-05-15 02:01:56.886356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.886440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.886466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.041 qpair failed and we were unable to recover it. 00:33:33.041 [2024-05-15 02:01:56.886590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.886754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.886784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.041 qpair failed and we were unable to recover it. 
00:33:33.041 [2024-05-15 02:01:56.886943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.887077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.887105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.041 qpair failed and we were unable to recover it. 00:33:33.041 [2024-05-15 02:01:56.887272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.887395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.887421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.041 qpair failed and we were unable to recover it. 00:33:33.041 [2024-05-15 02:01:56.887611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.887714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.887740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.041 qpair failed and we were unable to recover it. 00:33:33.041 [2024-05-15 02:01:56.887856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.887960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.888004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.041 qpair failed and we were unable to recover it. 00:33:33.041 [2024-05-15 02:01:56.888122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.888243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.888270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.041 qpair failed and we were unable to recover it. 00:33:33.041 [2024-05-15 02:01:56.888372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.888484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.888518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.041 qpair failed and we were unable to recover it. 00:33:33.041 [2024-05-15 02:01:56.888611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.888777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.888807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.041 qpair failed and we were unable to recover it. 
00:33:33.041 [2024-05-15 02:01:56.888952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.889095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.889121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.041 qpair failed and we were unable to recover it. 00:33:33.041 [2024-05-15 02:01:56.889271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.889372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.889402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.041 qpair failed and we were unable to recover it. 00:33:33.041 [2024-05-15 02:01:56.889507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.889661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.889689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.041 qpair failed and we were unable to recover it. 00:33:33.041 [2024-05-15 02:01:56.889833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.889979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.890006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.041 qpair failed and we were unable to recover it. 00:33:33.041 [2024-05-15 02:01:56.890174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.890354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.890381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.041 qpair failed and we were unable to recover it. 00:33:33.041 [2024-05-15 02:01:56.890500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.890643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.890670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.041 qpair failed and we were unable to recover it. 00:33:33.041 [2024-05-15 02:01:56.890815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.890932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.890970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.041 qpair failed and we were unable to recover it. 
00:33:33.041 [2024-05-15 02:01:56.891118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.891250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.891281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.041 qpair failed and we were unable to recover it. 00:33:33.041 [2024-05-15 02:01:56.891397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.891501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.891530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.041 qpair failed and we were unable to recover it. 00:33:33.041 [2024-05-15 02:01:56.891647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.891794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.891820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.041 qpair failed and we were unable to recover it. 00:33:33.041 [2024-05-15 02:01:56.891960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.892069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.892099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.041 qpair failed and we were unable to recover it. 00:33:33.041 [2024-05-15 02:01:56.892263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.892394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.892424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.041 qpair failed and we were unable to recover it. 00:33:33.041 [2024-05-15 02:01:56.892592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.892693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.041 [2024-05-15 02:01:56.892719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.041 qpair failed and we were unable to recover it. 00:33:33.042 [2024-05-15 02:01:56.892838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.892958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.892984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.042 qpair failed and we were unable to recover it. 
00:33:33.042 [2024-05-15 02:01:56.893075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.893208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.893265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.042 qpair failed and we were unable to recover it. 00:33:33.042 [2024-05-15 02:01:56.893438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.893536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.893563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.042 qpair failed and we were unable to recover it. 00:33:33.042 [2024-05-15 02:01:56.893686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.893785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.893812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.042 qpair failed and we were unable to recover it. 00:33:33.042 [2024-05-15 02:01:56.893935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.894071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.894100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.042 qpair failed and we were unable to recover it. 00:33:33.042 [2024-05-15 02:01:56.894239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.894365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.894391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.042 qpair failed and we were unable to recover it. 00:33:33.042 [2024-05-15 02:01:56.894564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.894712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.894738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.042 qpair failed and we were unable to recover it. 00:33:33.042 [2024-05-15 02:01:56.894834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.894932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.894959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.042 qpair failed and we were unable to recover it. 
00:33:33.042 [2024-05-15 02:01:56.895112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.895238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.895265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.042 qpair failed and we were unable to recover it. 00:33:33.042 [2024-05-15 02:01:56.895385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.895507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.895550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.042 qpair failed and we were unable to recover it. 00:33:33.042 [2024-05-15 02:01:56.895709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.895878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.895905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.042 qpair failed and we were unable to recover it. 00:33:33.042 [2024-05-15 02:01:56.896000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.896141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.896167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.042 qpair failed and we were unable to recover it. 00:33:33.042 [2024-05-15 02:01:56.896305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.896466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.896495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.042 qpair failed and we were unable to recover it. 00:33:33.042 [2024-05-15 02:01:56.896620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.896752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.896782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.042 qpair failed and we were unable to recover it. 00:33:33.042 [2024-05-15 02:01:56.896921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.897045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.897071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.042 qpair failed and we were unable to recover it. 
00:33:33.042 [2024-05-15 02:01:56.897161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.897273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.897301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.042 qpair failed and we were unable to recover it. 00:33:33.042 [2024-05-15 02:01:56.897418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.897512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.897539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.042 qpair failed and we were unable to recover it. 00:33:33.042 [2024-05-15 02:01:56.897634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.897754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.897781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.042 qpair failed and we were unable to recover it. 00:33:33.042 [2024-05-15 02:01:56.897943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.898099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.898128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.042 qpair failed and we were unable to recover it. 00:33:33.042 [2024-05-15 02:01:56.898289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.898428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.898454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.042 qpair failed and we were unable to recover it. 00:33:33.042 [2024-05-15 02:01:56.898569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.898717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.898743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.042 qpair failed and we were unable to recover it. 00:33:33.042 [2024-05-15 02:01:56.898908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.899065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.899094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.042 qpair failed and we were unable to recover it. 
00:33:33.042 [2024-05-15 02:01:56.899230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.899363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.899392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.042 qpair failed and we were unable to recover it. 00:33:33.042 [2024-05-15 02:01:56.899528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.899650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.899677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.042 qpair failed and we were unable to recover it. 00:33:33.042 [2024-05-15 02:01:56.899774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.899892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.042 [2024-05-15 02:01:56.899919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.042 qpair failed and we were unable to recover it. 00:33:33.042 [2024-05-15 02:01:56.900086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.043 [2024-05-15 02:01:56.900245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.043 [2024-05-15 02:01:56.900274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.043 qpair failed and we were unable to recover it. 00:33:33.043 [2024-05-15 02:01:56.900397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.043 [2024-05-15 02:01:56.900525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.043 [2024-05-15 02:01:56.900552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.043 qpair failed and we were unable to recover it. 00:33:33.043 [2024-05-15 02:01:56.900700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.043 [2024-05-15 02:01:56.900796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.043 [2024-05-15 02:01:56.900823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.043 qpair failed and we were unable to recover it. 00:33:33.043 [2024-05-15 02:01:56.900941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.043 [2024-05-15 02:01:56.901065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.043 [2024-05-15 02:01:56.901093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.043 qpair failed and we were unable to recover it. 
00:33:33.043 [2024-05-15 02:01:56.901234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.043 [2024-05-15 02:01:56.901326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.043 [2024-05-15 02:01:56.901353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420
00:33:33.043 qpair failed and we were unable to recover it.
00:33:33.043 [2024-05-15 02:01:56.901475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.043 [2024-05-15 02:01:56.901626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.043 [2024-05-15 02:01:56.901656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420
00:33:33.043 qpair failed and we were unable to recover it.
00:33:33.043 [2024-05-15 02:01:56.901761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.043 [2024-05-15 02:01:56.901870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.043 [2024-05-15 02:01:56.901900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420
00:33:33.043 qpair failed and we were unable to recover it.
00:33:33.043 [2024-05-15 02:01:56.902034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.043 [2024-05-15 02:01:56.902129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.043 [2024-05-15 02:01:56.902155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420
00:33:33.043 qpair failed and we were unable to recover it.
00:33:33.043 [2024-05-15 02:01:56.902310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.043 [2024-05-15 02:01:56.902445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.043 [2024-05-15 02:01:56.902486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420
00:33:33.043 qpair failed and we were unable to recover it.
00:33:33.043 [2024-05-15 02:01:56.902649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.043 [2024-05-15 02:01:56.902745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.043 [2024-05-15 02:01:56.902771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420
00:33:33.043 qpair failed and we were unable to recover it.
00:33:33.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 21776 Killed "${NVMF_APP[@]}" "$@"
00:33:33.043 [2024-05-15 02:01:56.902903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.043 [2024-05-15 02:01:56.903068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.043 [2024-05-15 02:01:56.903112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420
00:33:33.043 qpair failed and we were unable to recover it.
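The "Killed" record above is the pivot of this stretch of the log: target_disconnect.sh has killed the running nvmf target (pid 21776), so nothing is listening on 10.0.0.2:4420 and every TCP connect attempt from the host is refused. On Linux, errno = 111 is ECONNREFUSED. The following is a minimal standalone C sketch, not SPDK code (address and port copied from the log records), that reproduces the same errno against a reachable host with no listener on the port:

    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;

        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_port = htons(4420),   /* NVMe/TCP port from the log */
        };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        /* With the target process gone, the kernel answers the SYN with RST
         * and connect() fails with errno 111 (Connection refused). */
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

        close(fd);
        return 0;
    }

The host-side NVMe/TCP driver keeps retrying the same connect, which is why the identical three-record pattern repeats below until the target is restarted.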
00:33:33.043 02:01:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
[2024-05-15 02:01:56.903276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-05-15 02:01:56.903438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
02:01:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
[2024-05-15 02:01:56.903467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420
qpair failed and we were unable to recover it.
00:33:33.043 [2024-05-15 02:01:56.903572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
02:01:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
[2024-05-15 02:01:56.903690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-05-15 02:01:56.903717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420
qpair failed and we were unable to recover it.
00:33:33.043 02:01:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@721 -- # xtrace_disable
[2024-05-15 02:01:56.903865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
02:01:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[2024-05-15 02:01:56.904008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-05-15 02:01:56.904051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420
qpair failed and we were unable to recover it.
00:33:33.043 [2024-05-15 02:01:56.904188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.043 [2024-05-15 02:01:56.904352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.043 [2024-05-15 02:01:56.904378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420
00:33:33.043 qpair failed and we were unable to recover it.
00:33:33.043 [2024-05-15 02:01:56.904499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.043 [2024-05-15 02:01:56.904644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.043 [2024-05-15 02:01:56.904671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420
00:33:33.043 qpair failed and we were unable to recover it.
00:33:33.043 [2024-05-15 02:01:56.904769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.043 [2024-05-15 02:01:56.904891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.043 [2024-05-15 02:01:56.904918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420
00:33:33.043 qpair failed and we were unable to recover it.
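Interleaved with the failure records, the xtrace lines above mark the start of the recovery path: disconnect_init 10.0.0.2 calls nvmfappstart -m 0xF0 to bring the target back. In SPDK applications, -m is the CPU core mask; 0xF0 is binary 11110000, so the restarted target will be pinned to cores 4-7. A tiny illustrative decode of that mask (not SPDK code):

    #include <stdio.h>

    int main(void)
    {
        unsigned mask = 0xF0;              /* core mask from nvmfappstart -m 0xF0 */
        for (int core = 0; core < 32; core++)
            if (mask & (1u << core))
                printf("core %d enabled\n", core);   /* prints cores 4, 5, 6, 7 */
        return 0;
    }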
00:33:33.043 [2024-05-15 02:01:56.905087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.043 [2024-05-15 02:01:56.905225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.043 [2024-05-15 02:01:56.905264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420
00:33:33.043 qpair failed and we were unable to recover it.
00:33:33.043 [2024-05-15 02:01:56.905428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.043 [2024-05-15 02:01:56.905560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.043 [2024-05-15 02:01:56.905588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420
00:33:33.043 qpair failed and we were unable to recover it.
00:33:33.043 [2024-05-15 02:01:56.905742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.043 [2024-05-15 02:01:56.905873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.043 [2024-05-15 02:01:56.905915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420
00:33:33.043 qpair failed and we were unable to recover it.
00:33:33.043 [2024-05-15 02:01:56.906055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.043 [2024-05-15 02:01:56.906188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.043 [2024-05-15 02:01:56.906224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420
00:33:33.043 qpair failed and we were unable to recover it.
00:33:33.043 [2024-05-15 02:01:56.906371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.043 [2024-05-15 02:01:56.906486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.043 [2024-05-15 02:01:56.906512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420
00:33:33.043 qpair failed and we were unable to recover it.
00:33:33.043 [2024-05-15 02:01:56.906610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.043 [2024-05-15 02:01:56.906731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.043 [2024-05-15 02:01:56.906757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420
00:33:33.043 qpair failed and we were unable to recover it.
00:33:33.043 [2024-05-15 02:01:56.906904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.043 [2024-05-15 02:01:56.907055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.043 [2024-05-15 02:01:56.907084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420
00:33:33.043 qpair failed and we were unable to recover it.
00:33:33.043 [2024-05-15 02:01:56.907183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.043 [2024-05-15 02:01:56.907318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.043 [2024-05-15 02:01:56.907348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420
00:33:33.043 qpair failed and we were unable to recover it.
00:33:33.043 [2024-05-15 02:01:56.907509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.043 [2024-05-15 02:01:56.907628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.043 [2024-05-15 02:01:56.907655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420
00:33:33.043 qpair failed and we were unable to recover it.
00:33:33.043 02:01:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=22248
00:33:33.043 02:01:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
[2024-05-15 02:01:56.907778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.043 02:01:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 22248
[2024-05-15 02:01:56.907903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-05-15 02:01:56.907933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420
qpair failed and we were unable to recover it.
00:33:33.044 02:01:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@828 -- # '[' -z 22248 ']'
[2024-05-15 02:01:56.908068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-05-15 02:01:56.908198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.044 02:01:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock
[2024-05-15 02:01:56.908236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420
qpair failed and we were unable to recover it.
00:33:33.044 [2024-05-15 02:01:56.908357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.044 02:01:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local max_retries=100
[2024-05-15 02:01:56.908460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-05-15 02:01:56.908487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420
qpair failed and we were unable to recover it.
00:33:33.044 02:01:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
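The trace above relaunches nvmf_tgt inside the cvl_0_0_ns_spdk network namespace as pid 22248 and then calls waitforlisten 22248, which blocks until the new process is up and answering on its RPC socket, /var/tmp/spdk.sock, retrying up to max_retries=100 times. waitforlisten itself is a bash helper in SPDK's test harness; the sketch below is only a rough standalone C equivalent of the same wait-until-listening pattern, with the socket path and retry count taken from the log:

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    /* Poll a UNIX-domain socket until a connect() succeeds or we give up.
     * The target counts as "up" once its RPC socket accepts connections. */
    static int wait_for_rpc_socket(const char *path, int max_retries)
    {
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        for (int i = 0; i < max_retries; i++) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd >= 0 && connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                close(fd);
                return 0;                  /* target is listening */
            }
            if (fd >= 0)
                close(fd);
            usleep(100 * 1000);            /* 100 ms between attempts */
        }
        return -1;
    }

    int main(void)
    {
        if (wait_for_rpc_socket("/var/tmp/spdk.sock", 100) != 0) {
            fprintf(stderr, "timed out waiting for RPC socket\n");
            return 1;
        }
        printf("RPC socket is up\n");
        return 0;
    }

Once a connect() on the RPC socket succeeds, the harness can issue RPCs to the new target, and the host's qpair reconnect attempts can start landing on a live listener again.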
00:33:33.044 [2024-05-15 02:01:56.908574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.044 02:01:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # xtrace_disable
00:33:33.044 [2024-05-15 02:01:56.908720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.044 02:01:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:33.044 [2024-05-15 02:01:56.908747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420
qpair failed and we were unable to recover it.
00:33:33.044 [2024-05-15 02:01:56.908889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.044 [2024-05-15 02:01:56.909017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.044 [2024-05-15 02:01:56.909044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420
00:33:33.044 qpair failed and we were unable to recover it.
00:33:33.044 [2024-05-15 02:01:56.909168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.044 [2024-05-15 02:01:56.909323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.044 [2024-05-15 02:01:56.909349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420
00:33:33.044 qpair failed and we were unable to recover it.
00:33:33.044 [2024-05-15 02:01:56.909504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.044 [2024-05-15 02:01:56.909629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.044 [2024-05-15 02:01:56.909658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420
00:33:33.044 qpair failed and we were unable to recover it.
00:33:33.044 [2024-05-15 02:01:56.909818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.044 [2024-05-15 02:01:56.909974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.044 [2024-05-15 02:01:56.910003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420
00:33:33.044 qpair failed and we were unable to recover it.
00:33:33.044 [2024-05-15 02:01:56.910116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.044 [2024-05-15 02:01:56.910237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.044 [2024-05-15 02:01:56.910273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420
00:33:33.044 qpair failed and we were unable to recover it.
00:33:33.044 [2024-05-15 02:01:56.910393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.044 [2024-05-15 02:01:56.910545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.044 [2024-05-15 02:01:56.910571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420
00:33:33.044 qpair failed and we were unable to recover it.
00:33:33.044 [2024-05-15 02:01:56.910717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.044 [2024-05-15 02:01:56.910848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.044 [2024-05-15 02:01:56.910877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.044 qpair failed and we were unable to recover it. 00:33:33.044 [2024-05-15 02:01:56.911029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.044 [2024-05-15 02:01:56.911146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.044 [2024-05-15 02:01:56.911173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.044 qpair failed and we were unable to recover it. 00:33:33.044 [2024-05-15 02:01:56.911288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.044 [2024-05-15 02:01:56.911394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.044 [2024-05-15 02:01:56.911419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.044 qpair failed and we were unable to recover it. 00:33:33.044 [2024-05-15 02:01:56.911570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.044 [2024-05-15 02:01:56.911705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.044 [2024-05-15 02:01:56.911733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.044 qpair failed and we were unable to recover it. 00:33:33.044 [2024-05-15 02:01:56.911880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.044 [2024-05-15 02:01:56.911998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.044 [2024-05-15 02:01:56.912022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.044 qpair failed and we were unable to recover it. 00:33:33.044 [2024-05-15 02:01:56.912169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.044 [2024-05-15 02:01:56.912319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.044 [2024-05-15 02:01:56.912346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.044 qpair failed and we were unable to recover it. 00:33:33.044 [2024-05-15 02:01:56.912457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.044 [2024-05-15 02:01:56.912584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.044 [2024-05-15 02:01:56.912611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.044 qpair failed and we were unable to recover it. 
00:33:33.044 [2024-05-15 02:01:56.912738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.044 [2024-05-15 02:01:56.912834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.044 [2024-05-15 02:01:56.912859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.044 qpair failed and we were unable to recover it. 00:33:33.044 [2024-05-15 02:01:56.912985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.044 [2024-05-15 02:01:56.913105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.044 [2024-05-15 02:01:56.913141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.044 qpair failed and we were unable to recover it. 00:33:33.044 [2024-05-15 02:01:56.913250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.044 [2024-05-15 02:01:56.913364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.044 [2024-05-15 02:01:56.913394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.044 qpair failed and we were unable to recover it. 00:33:33.044 [2024-05-15 02:01:56.913515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.044 [2024-05-15 02:01:56.913644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.044 [2024-05-15 02:01:56.913670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.044 qpair failed and we were unable to recover it. 00:33:33.044 [2024-05-15 02:01:56.913773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.044 [2024-05-15 02:01:56.913920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.044 [2024-05-15 02:01:56.913949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.044 qpair failed and we were unable to recover it. 00:33:33.044 [2024-05-15 02:01:56.914105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.044 [2024-05-15 02:01:56.914205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.044 [2024-05-15 02:01:56.914250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.044 qpair failed and we were unable to recover it. 00:33:33.044 [2024-05-15 02:01:56.914376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.044 [2024-05-15 02:01:56.914461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.044 [2024-05-15 02:01:56.914487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.044 qpair failed and we were unable to recover it. 
00:33:33.044 [2024-05-15 02:01:56.914631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.044 [2024-05-15 02:01:56.914749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.044 [2024-05-15 02:01:56.914775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.044 qpair failed and we were unable to recover it. 00:33:33.044 [2024-05-15 02:01:56.914885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.044 [2024-05-15 02:01:56.915007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.044 [2024-05-15 02:01:56.915033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.044 qpair failed and we were unable to recover it. 00:33:33.044 [2024-05-15 02:01:56.915140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.044 [2024-05-15 02:01:56.915264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.044 [2024-05-15 02:01:56.915291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.044 qpair failed and we were unable to recover it. 00:33:33.327 [2024-05-15 02:01:56.915410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.915553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.915582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.327 qpair failed and we were unable to recover it. 00:33:33.327 [2024-05-15 02:01:56.915713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.915850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.915879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.327 qpair failed and we were unable to recover it. 00:33:33.327 [2024-05-15 02:01:56.916022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.916117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.916144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.327 qpair failed and we were unable to recover it. 00:33:33.327 [2024-05-15 02:01:56.916257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.916358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.916386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.327 qpair failed and we were unable to recover it. 
00:33:33.327 [2024-05-15 02:01:56.916537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.916655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.916685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.327 qpair failed and we were unable to recover it. 00:33:33.327 [2024-05-15 02:01:56.916855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.916963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.916990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.327 qpair failed and we were unable to recover it. 00:33:33.327 [2024-05-15 02:01:56.917117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.917231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.917273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.327 qpair failed and we were unable to recover it. 00:33:33.327 [2024-05-15 02:01:56.917383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.917523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.917555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.327 qpair failed and we were unable to recover it. 00:33:33.327 [2024-05-15 02:01:56.917685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.917787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.917815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.327 qpair failed and we were unable to recover it. 00:33:33.327 [2024-05-15 02:01:56.917904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.918002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.918028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.327 qpair failed and we were unable to recover it. 00:33:33.327 [2024-05-15 02:01:56.918153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.918248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.918275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.327 qpair failed and we were unable to recover it. 
00:33:33.327 [2024-05-15 02:01:56.918378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.918471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.918497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.327 qpair failed and we were unable to recover it. 00:33:33.327 [2024-05-15 02:01:56.918625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.918763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.918792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.327 qpair failed and we were unable to recover it. 00:33:33.327 [2024-05-15 02:01:56.918939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.919058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.919089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.327 qpair failed and we were unable to recover it. 00:33:33.327 [2024-05-15 02:01:56.919181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.919312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.919342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.327 qpair failed and we were unable to recover it. 00:33:33.327 [2024-05-15 02:01:56.919486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.919597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.919625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.327 qpair failed and we were unable to recover it. 00:33:33.327 [2024-05-15 02:01:56.919804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.919928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.919954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.327 qpair failed and we were unable to recover it. 00:33:33.327 [2024-05-15 02:01:56.920075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.920169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.920196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.327 qpair failed and we were unable to recover it. 
00:33:33.327 [2024-05-15 02:01:56.920313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.920426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.920455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.327 qpair failed and we were unable to recover it. 00:33:33.327 [2024-05-15 02:01:56.920598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.920717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.920743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.327 qpair failed and we were unable to recover it. 00:33:33.327 [2024-05-15 02:01:56.920847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.920945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.920971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.327 qpair failed and we were unable to recover it. 00:33:33.327 [2024-05-15 02:01:56.921071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.921207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.921244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.327 qpair failed and we were unable to recover it. 00:33:33.327 [2024-05-15 02:01:56.921374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.921481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.921509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.327 qpair failed and we were unable to recover it. 00:33:33.327 [2024-05-15 02:01:56.921637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.921757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.921783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.327 qpair failed and we were unable to recover it. 00:33:33.327 [2024-05-15 02:01:56.921894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.327 [2024-05-15 02:01:56.922006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.922036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.328 qpair failed and we were unable to recover it. 
00:33:33.328 [2024-05-15 02:01:56.922183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.922310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.922339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.328 qpair failed and we were unable to recover it. 00:33:33.328 [2024-05-15 02:01:56.922469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.922562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.922587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.328 qpair failed and we were unable to recover it. 00:33:33.328 [2024-05-15 02:01:56.922700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.922818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.922846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.328 qpair failed and we were unable to recover it. 00:33:33.328 [2024-05-15 02:01:56.922979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.923100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.923126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.328 qpair failed and we were unable to recover it. 00:33:33.328 [2024-05-15 02:01:56.923231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.923392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.923418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.328 qpair failed and we were unable to recover it. 00:33:33.328 [2024-05-15 02:01:56.923536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.923662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.923691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.328 qpair failed and we were unable to recover it. 00:33:33.328 [2024-05-15 02:01:56.923838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.923969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.923994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.328 qpair failed and we were unable to recover it. 
00:33:33.328 [2024-05-15 02:01:56.924120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.924241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.924268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.328 qpair failed and we were unable to recover it. 00:33:33.328 [2024-05-15 02:01:56.924416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.924515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.924557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.328 qpair failed and we were unable to recover it. 00:33:33.328 [2024-05-15 02:01:56.924660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.924765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.924809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.328 qpair failed and we were unable to recover it. 00:33:33.328 [2024-05-15 02:01:56.924941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.925071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.925097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.328 qpair failed and we were unable to recover it. 00:33:33.328 [2024-05-15 02:01:56.925193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.925328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.925371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.328 qpair failed and we were unable to recover it. 00:33:33.328 [2024-05-15 02:01:56.925470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.925614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.925642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.328 qpair failed and we were unable to recover it. 00:33:33.328 [2024-05-15 02:01:56.925790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.925916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.925941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.328 qpair failed and we were unable to recover it. 
00:33:33.328 [2024-05-15 02:01:56.926035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.926135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.926179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.328 qpair failed and we were unable to recover it. 00:33:33.328 [2024-05-15 02:01:56.926344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.926452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.926478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.328 qpair failed and we were unable to recover it. 00:33:33.328 [2024-05-15 02:01:56.926599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.926721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.926749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.328 qpair failed and we were unable to recover it. 00:33:33.328 [2024-05-15 02:01:56.926896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.927025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.927053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.328 qpair failed and we were unable to recover it. 00:33:33.328 [2024-05-15 02:01:56.927198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.927328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.927354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.328 qpair failed and we were unable to recover it. 00:33:33.328 [2024-05-15 02:01:56.927453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.927543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.927569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.328 qpair failed and we were unable to recover it. 00:33:33.328 [2024-05-15 02:01:56.927699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.927864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.927890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.328 qpair failed and we were unable to recover it. 
00:33:33.328 [2024-05-15 02:01:56.927992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.928080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.928106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.328 qpair failed and we were unable to recover it. 00:33:33.328 [2024-05-15 02:01:56.928209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.928313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.928341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.328 qpair failed and we were unable to recover it. 00:33:33.328 [2024-05-15 02:01:56.928444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.928545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.928589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.328 qpair failed and we were unable to recover it. 00:33:33.328 [2024-05-15 02:01:56.928694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.928820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.928848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.328 qpair failed and we were unable to recover it. 00:33:33.328 [2024-05-15 02:01:56.928990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.929120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.929146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.328 qpair failed and we were unable to recover it. 00:33:33.328 [2024-05-15 02:01:56.929248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.929334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.929360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.328 qpair failed and we were unable to recover it. 00:33:33.328 [2024-05-15 02:01:56.929463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.929575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.929616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.328 qpair failed and we were unable to recover it. 
00:33:33.328 [2024-05-15 02:01:56.929735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.328 [2024-05-15 02:01:56.929859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.929885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.329 qpair failed and we were unable to recover it. 00:33:33.329 [2024-05-15 02:01:56.930016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.930130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.930158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.329 qpair failed and we were unable to recover it. 00:33:33.329 [2024-05-15 02:01:56.930300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.930433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.930459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.329 qpair failed and we were unable to recover it. 00:33:33.329 [2024-05-15 02:01:56.930581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.930678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.930705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.329 qpair failed and we were unable to recover it. 00:33:33.329 [2024-05-15 02:01:56.930796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.930911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.930938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.329 qpair failed and we were unable to recover it. 00:33:33.329 [2024-05-15 02:01:56.931029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.931130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.931157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.329 qpair failed and we were unable to recover it. 00:33:33.329 [2024-05-15 02:01:56.931270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.931372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.931398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.329 qpair failed and we were unable to recover it. 
00:33:33.329 [2024-05-15 02:01:56.931496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.931663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.931690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.329 qpair failed and we were unable to recover it. 00:33:33.329 [2024-05-15 02:01:56.931804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.931918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.931945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.329 qpair failed and we were unable to recover it. 00:33:33.329 [2024-05-15 02:01:56.932048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.932141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.932167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.329 qpair failed and we were unable to recover it. 00:33:33.329 [2024-05-15 02:01:56.932269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.932374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.932400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.329 qpair failed and we were unable to recover it. 00:33:33.329 [2024-05-15 02:01:56.932504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.932599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.932640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.329 qpair failed and we were unable to recover it. 00:33:33.329 [2024-05-15 02:01:56.932775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.932902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.932929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.329 qpair failed and we were unable to recover it. 00:33:33.329 [2024-05-15 02:01:56.933076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.933202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.933234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.329 qpair failed and we were unable to recover it. 
00:33:33.329 [2024-05-15 02:01:56.933356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.933456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.933481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.329 qpair failed and we were unable to recover it. 00:33:33.329 [2024-05-15 02:01:56.933594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.933721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.933747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.329 qpair failed and we were unable to recover it. 00:33:33.329 [2024-05-15 02:01:56.933845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.933961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.933987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.329 qpair failed and we were unable to recover it. 00:33:33.329 [2024-05-15 02:01:56.934085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.934185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.934212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.329 qpair failed and we were unable to recover it. 00:33:33.329 [2024-05-15 02:01:56.934337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.934443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.934469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.329 qpair failed and we were unable to recover it. 00:33:33.329 [2024-05-15 02:01:56.934596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.934747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.934773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.329 qpair failed and we were unable to recover it. 00:33:33.329 [2024-05-15 02:01:56.934866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.934997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.935023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.329 qpair failed and we were unable to recover it. 
00:33:33.329 [2024-05-15 02:01:56.935151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.935254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.935281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.329 qpair failed and we were unable to recover it. 00:33:33.329 [2024-05-15 02:01:56.935407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.935532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.935565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.329 qpair failed and we were unable to recover it. 00:33:33.329 [2024-05-15 02:01:56.935667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.935788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.935814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.329 qpair failed and we were unable to recover it. 00:33:33.329 [2024-05-15 02:01:56.935920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.936042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.936068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.329 qpair failed and we were unable to recover it. 00:33:33.329 [2024-05-15 02:01:56.936166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.936266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.936293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.329 qpair failed and we were unable to recover it. 00:33:33.329 [2024-05-15 02:01:56.936412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.936538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.936565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.329 qpair failed and we were unable to recover it. 00:33:33.329 [2024-05-15 02:01:56.936660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.936785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.936811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.329 qpair failed and we were unable to recover it. 
00:33:33.329 [2024-05-15 02:01:56.936958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.937054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.937080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.329 qpair failed and we were unable to recover it. 00:33:33.329 [2024-05-15 02:01:56.937207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.937311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.329 [2024-05-15 02:01:56.937338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.329 qpair failed and we were unable to recover it. 00:33:33.330 [2024-05-15 02:01:56.937452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.937551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.937577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.330 qpair failed and we were unable to recover it. 00:33:33.330 [2024-05-15 02:01:56.937682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.937778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.937804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.330 qpair failed and we were unable to recover it. 00:33:33.330 [2024-05-15 02:01:56.937925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.938023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.938049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.330 qpair failed and we were unable to recover it. 00:33:33.330 [2024-05-15 02:01:56.938180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.938309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.938336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.330 qpair failed and we were unable to recover it. 00:33:33.330 [2024-05-15 02:01:56.938431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.938549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.938575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.330 qpair failed and we were unable to recover it. 
00:33:33.330 [2024-05-15 02:01:56.938700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.938817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.938843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.330 qpair failed and we were unable to recover it. 00:33:33.330 [2024-05-15 02:01:56.938962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.939054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.939080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.330 qpair failed and we were unable to recover it. 00:33:33.330 [2024-05-15 02:01:56.939244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.939346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.939373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.330 qpair failed and we were unable to recover it. 00:33:33.330 [2024-05-15 02:01:56.939514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.939612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.939637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.330 qpair failed and we were unable to recover it. 00:33:33.330 [2024-05-15 02:01:56.939723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.939815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.939841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.330 qpair failed and we were unable to recover it. 00:33:33.330 [2024-05-15 02:01:56.939941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.940036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.940063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.330 qpair failed and we were unable to recover it. 00:33:33.330 [2024-05-15 02:01:56.940186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.940315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.940343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.330 qpair failed and we were unable to recover it. 
00:33:33.330 [2024-05-15 02:01:56.940434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.940559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.940585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.330 qpair failed and we were unable to recover it. 00:33:33.330 [2024-05-15 02:01:56.940716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.940821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.940847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.330 qpair failed and we were unable to recover it. 00:33:33.330 [2024-05-15 02:01:56.940966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.941084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.941109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.330 qpair failed and we were unable to recover it. 00:33:33.330 [2024-05-15 02:01:56.941259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.941365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.941391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.330 qpair failed and we were unable to recover it. 00:33:33.330 [2024-05-15 02:01:56.941526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.941653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.941679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.330 qpair failed and we were unable to recover it. 00:33:33.330 [2024-05-15 02:01:56.941798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.941944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.941970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.330 qpair failed and we were unable to recover it. 00:33:33.330 [2024-05-15 02:01:56.942065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.942186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.942212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.330 qpair failed and we were unable to recover it. 
00:33:33.330 [2024-05-15 02:01:56.942341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.942463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.942489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.330 qpair failed and we were unable to recover it. 00:33:33.330 [2024-05-15 02:01:56.942604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.942703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.942731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.330 qpair failed and we were unable to recover it. 00:33:33.330 [2024-05-15 02:01:56.942822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.942941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.942968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.330 qpair failed and we were unable to recover it. 00:33:33.330 [2024-05-15 02:01:56.943104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.943206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.943240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.330 qpair failed and we were unable to recover it. 00:33:33.330 [2024-05-15 02:01:56.943367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.943460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.943486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.330 qpair failed and we were unable to recover it. 00:33:33.330 [2024-05-15 02:01:56.943582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.943684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.943710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.330 qpair failed and we were unable to recover it. 00:33:33.330 [2024-05-15 02:01:56.943804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.943950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.943976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.330 qpair failed and we were unable to recover it. 
00:33:33.330 [2024-05-15 02:01:56.944114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.944240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.944266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.330 qpair failed and we were unable to recover it. 00:33:33.330 [2024-05-15 02:01:56.944359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.944454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.944480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.330 qpair failed and we were unable to recover it. 00:33:33.330 [2024-05-15 02:01:56.944580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.944706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.944732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.330 qpair failed and we were unable to recover it. 00:33:33.330 [2024-05-15 02:01:56.944822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.330 [2024-05-15 02:01:56.944945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.331 [2024-05-15 02:01:56.944970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.331 qpair failed and we were unable to recover it. 00:33:33.331 [2024-05-15 02:01:56.945096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.331 [2024-05-15 02:01:56.945196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.331 [2024-05-15 02:01:56.945229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.331 qpair failed and we were unable to recover it. 00:33:33.331 [2024-05-15 02:01:56.945334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.331 [2024-05-15 02:01:56.945437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.331 [2024-05-15 02:01:56.945463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.331 qpair failed and we were unable to recover it. 00:33:33.331 [2024-05-15 02:01:56.946289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.331 [2024-05-15 02:01:56.946444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.331 [2024-05-15 02:01:56.946471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.331 qpair failed and we were unable to recover it. 
00:33:33.331 [2024-05-15 02:01:56.946589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.331 [2024-05-15 02:01:56.946703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.331 [2024-05-15 02:01:56.946729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420
00:33:33.331 qpair failed and we were unable to recover it.
[log condensed: the four-line pattern above (two "connect() failed, errno = 111" entries from posix.c:1037, one "sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420" from nvme_tcp.c:2374, and "qpair failed and we were unable to recover it.") repeats verbatim for every reconnect attempt from 02:01:56.946848 through 02:01:56.954387, differing only in timestamps]
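On Linux, errno = 111 is ECONNREFUSED: the TCP peer at 10.0.0.2:4420 actively refused the connection, which is what happens while no listener is bound to that port. A minimal, self-contained C sketch that reproduces the same errno; the address and port mirror the log, everything else is illustrative:

    /* repro_econnrefused.c: connect() to a TCP port with no listener fails
     * with errno = 111 (ECONNREFUSED), matching the log entries above. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener on the port this prints: errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }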
[log condensed: the same reconnect-failure pattern continues from 02:01:56.954476 through 02:01:56.956261]
00:33:33.332 [2024-05-15 02:01:56.956353] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization...
00:33:33.332 [2024-05-15 02:01:56.956423] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[log condensed: the reconnect-failure pattern resumes, interleaved with the initialization above, from 02:01:56.956383 through 02:01:56.958070]
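The EAL coremask -c 0xF0 in the line above selects lcores 4 through 7, since 0xF0 is binary 11110000 (bits 4..7 set). A small sketch of that arithmetic, purely illustrative:

    /* coremask.c: decode a DPDK-style EAL coremask such as -c 0xF0.
     * 0xF0 = 0b11110000, so bits 4..7 are set, i.e. lcores 4, 5, 6, 7. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long mask = 0xF0;                    /* value taken from the log */
        for (int lcore = 0; lcore < 64; lcore++)
            if (mask & (1UL << lcore))
                printf("lcore %d enabled\n", lcore);  /* prints lcores 4..7 */
        return 0;
    }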
[log condensed: the reconnect-failure pattern ("connect() failed, errno = 111" from posix.c:1037, "sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420" from nvme_tcp.c:2374, and "qpair failed and we were unable to recover it.") repeats for every subsequent attempt from 02:01:56.958178 through 02:01:56.989531]
00:33:33.336 [2024-05-15 02:01:56.989646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.336 [2024-05-15 02:01:56.989748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.336 [2024-05-15 02:01:56.989774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.336 qpair failed and we were unable to recover it. 00:33:33.336 [2024-05-15 02:01:56.989929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.336 [2024-05-15 02:01:56.990031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.336 [2024-05-15 02:01:56.990058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.336 qpair failed and we were unable to recover it. 00:33:33.336 [2024-05-15 02:01:56.990182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.336 [2024-05-15 02:01:56.990322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.336 [2024-05-15 02:01:56.990349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.336 qpair failed and we were unable to recover it. 00:33:33.336 [2024-05-15 02:01:56.990438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.336 [2024-05-15 02:01:56.990565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.336 [2024-05-15 02:01:56.990591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.336 qpair failed and we were unable to recover it. 00:33:33.336 [2024-05-15 02:01:56.990689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.336 [2024-05-15 02:01:56.990778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.336 [2024-05-15 02:01:56.990815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.336 qpair failed and we were unable to recover it. 00:33:33.336 [2024-05-15 02:01:56.990919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.336 [2024-05-15 02:01:56.991014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.336 [2024-05-15 02:01:56.991040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.336 qpair failed and we were unable to recover it. 00:33:33.336 [2024-05-15 02:01:56.991169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.336 [2024-05-15 02:01:56.991319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.336 [2024-05-15 02:01:56.991346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.336 qpair failed and we were unable to recover it. 
00:33:33.336 [2024-05-15 02:01:56.991446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.336 [2024-05-15 02:01:56.992236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.336 [2024-05-15 02:01:56.992266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.336 qpair failed and we were unable to recover it. 00:33:33.336 [2024-05-15 02:01:56.992384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.336 [2024-05-15 02:01:56.992488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.336 [2024-05-15 02:01:56.992530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.336 qpair failed and we were unable to recover it. 00:33:33.336 [2024-05-15 02:01:56.992630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.336 [2024-05-15 02:01:56.992759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.336 [2024-05-15 02:01:56.992786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.336 qpair failed and we were unable to recover it. 00:33:33.337 [2024-05-15 02:01:56.992886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.992981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.993007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.337 qpair failed and we were unable to recover it. 00:33:33.337 [2024-05-15 02:01:56.993105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.993232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.993259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.337 qpair failed and we were unable to recover it. 00:33:33.337 [2024-05-15 02:01:56.993367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.993463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.993489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.337 qpair failed and we were unable to recover it. 00:33:33.337 [2024-05-15 02:01:56.993617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.993740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.993782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.337 qpair failed and we were unable to recover it. 
00:33:33.337 [2024-05-15 02:01:56.993883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.994007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.994033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.337 qpair failed and we were unable to recover it. 00:33:33.337 [2024-05-15 02:01:56.994132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.994241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.994276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.337 qpair failed and we were unable to recover it. 00:33:33.337 [2024-05-15 02:01:56.994399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.994491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.994518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.337 qpair failed and we were unable to recover it. 00:33:33.337 [2024-05-15 02:01:56.994643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.994762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.994787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.337 qpair failed and we were unable to recover it. 00:33:33.337 [2024-05-15 02:01:56.994889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.995052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.995093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.337 qpair failed and we were unable to recover it. 00:33:33.337 [2024-05-15 02:01:56.995196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.995329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.995355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.337 qpair failed and we were unable to recover it. 00:33:33.337 [2024-05-15 02:01:56.995456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.995554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.995580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.337 qpair failed and we were unable to recover it. 
00:33:33.337 [2024-05-15 02:01:56.995682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.996402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.996431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.337 qpair failed and we were unable to recover it. 00:33:33.337 [2024-05-15 02:01:56.996558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.996682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.996708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.337 qpair failed and we were unable to recover it. 00:33:33.337 [2024-05-15 02:01:56.996819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.996913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.996939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.337 qpair failed and we were unable to recover it. 00:33:33.337 [2024-05-15 02:01:56.997072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.997244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.997271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.337 qpair failed and we were unable to recover it. 00:33:33.337 [2024-05-15 02:01:56.997369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.997469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.997495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.337 qpair failed and we were unable to recover it. 00:33:33.337 [2024-05-15 02:01:56.997622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.997721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.997748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.337 qpair failed and we were unable to recover it. 00:33:33.337 [2024-05-15 02:01:56.997851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.997967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.997993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.337 qpair failed and we were unable to recover it. 
00:33:33.337 [2024-05-15 02:01:56.998088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.998205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.998243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.337 qpair failed and we were unable to recover it. 00:33:33.337 [2024-05-15 02:01:56.998376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.998479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.998505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.337 qpair failed and we were unable to recover it. 00:33:33.337 [2024-05-15 02:01:56.998656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.998758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.998784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.337 qpair failed and we were unable to recover it. 00:33:33.337 [2024-05-15 02:01:56.998915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.999031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.999057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.337 qpair failed and we were unable to recover it. 00:33:33.337 [2024-05-15 02:01:56.999151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.999261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.999287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.337 qpair failed and we were unable to recover it. 00:33:33.337 [2024-05-15 02:01:56.999413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.999508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.999538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.337 qpair failed and we were unable to recover it. 00:33:33.337 [2024-05-15 02:01:56.999676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.999768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.337 [2024-05-15 02:01:56.999804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.337 qpair failed and we were unable to recover it. 
00:33:33.337 [2024-05-15 02:01:56.999921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.337 EAL: No free 2048 kB hugepages reported on node 1
[... connect()/qpair failure pattern continues, timestamps 02:01:57.000046 through 02:01:57.001692 ...]
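Context on the interleaved EAL line (not part of the original log output): DPDK's EAL emits this warning when a NUMA node has no free 2048 kB hugepages left for it to map. A minimal, hypothetical C check of the same counter the warning refers to, reading standard Linux sysfs paths rather than anything SPDK-specific:

/* Hypothetical helper (not SPDK code): print the number of free 2048 kB
 * hugepages, the resource the EAL warning above reports as exhausted.
 * System-wide counter; the per-node equivalent lives under e.g.
 * /sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages */
#include <stdio.h>

int main(void)
{
    const char *path =
        "/sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages";
    FILE *f = fopen(path, "r");
    if (!f) {
        perror("fopen");   /* fails if the kernel has no 2 MB hugepage support */
        return 1;
    }
    long free_pages = 0;
    if (fscanf(f, "%ld", &free_pages) == 1)
        printf("free 2048 kB hugepages: %ld\n", free_pages);
    fclose(f);
    return 0;
}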
00:33:33.338 [2024-05-15 02:01:57.001806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.001894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.001920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.338 qpair failed and we were unable to recover it. 00:33:33.338 [2024-05-15 02:01:57.002014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.002128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.002154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.338 qpair failed and we were unable to recover it. 00:33:33.338 [2024-05-15 02:01:57.002264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.002379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.002405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.338 qpair failed and we were unable to recover it. 00:33:33.338 [2024-05-15 02:01:57.002530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.002661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.002687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.338 qpair failed and we were unable to recover it. 00:33:33.338 [2024-05-15 02:01:57.002805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.002956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.002982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.338 qpair failed and we were unable to recover it. 00:33:33.338 [2024-05-15 02:01:57.003119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.003244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.003276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.338 qpair failed and we were unable to recover it. 00:33:33.338 [2024-05-15 02:01:57.003380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.003489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.003527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.338 qpair failed and we were unable to recover it. 
00:33:33.338 [2024-05-15 02:01:57.003631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.003759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.003785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.338 qpair failed and we were unable to recover it. 00:33:33.338 [2024-05-15 02:01:57.003914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.004039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.004065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.338 qpair failed and we were unable to recover it. 00:33:33.338 [2024-05-15 02:01:57.004202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.004333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.004360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.338 qpair failed and we were unable to recover it. 00:33:33.338 [2024-05-15 02:01:57.004485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.004607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.004633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.338 qpair failed and we were unable to recover it. 00:33:33.338 [2024-05-15 02:01:57.004750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.004874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.004900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.338 qpair failed and we were unable to recover it. 00:33:33.338 [2024-05-15 02:01:57.005009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.005135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.005162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.338 qpair failed and we were unable to recover it. 00:33:33.338 [2024-05-15 02:01:57.005278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.005407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.005432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.338 qpair failed and we were unable to recover it. 
00:33:33.338 [2024-05-15 02:01:57.005538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.005653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.005679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.338 qpair failed and we were unable to recover it. 00:33:33.338 [2024-05-15 02:01:57.005798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.005917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.005943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.338 qpair failed and we were unable to recover it. 00:33:33.338 [2024-05-15 02:01:57.006058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.006182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.006208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.338 qpair failed and we were unable to recover it. 00:33:33.338 [2024-05-15 02:01:57.006317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.006428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.006454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.338 qpair failed and we were unable to recover it. 00:33:33.338 [2024-05-15 02:01:57.006591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.006719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.006744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.338 qpair failed and we were unable to recover it. 00:33:33.338 [2024-05-15 02:01:57.006861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.006989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.007015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.338 qpair failed and we were unable to recover it. 00:33:33.338 [2024-05-15 02:01:57.007110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.007204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.007238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.338 qpair failed and we were unable to recover it. 
00:33:33.338 [2024-05-15 02:01:57.007330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.007432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.007458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.338 qpair failed and we were unable to recover it. 00:33:33.338 [2024-05-15 02:01:57.007578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.007708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.007733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.338 qpair failed and we were unable to recover it. 00:33:33.338 [2024-05-15 02:01:57.007829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.007945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.007971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.338 qpair failed and we were unable to recover it. 00:33:33.338 [2024-05-15 02:01:57.008114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.008210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.008243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.338 qpair failed and we were unable to recover it. 00:33:33.338 [2024-05-15 02:01:57.008378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.008470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.008496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.338 qpair failed and we were unable to recover it. 00:33:33.338 [2024-05-15 02:01:57.008634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.008749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.338 [2024-05-15 02:01:57.008774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.338 qpair failed and we were unable to recover it. 00:33:33.339 [2024-05-15 02:01:57.008905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.009020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.009049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.339 qpair failed and we were unable to recover it. 
00:33:33.339 [2024-05-15 02:01:57.009197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.009384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.009411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.339 qpair failed and we were unable to recover it. 00:33:33.339 [2024-05-15 02:01:57.009501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.009626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.009653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.339 qpair failed and we were unable to recover it. 00:33:33.339 [2024-05-15 02:01:57.009800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.009918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.009944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.339 qpair failed and we were unable to recover it. 00:33:33.339 [2024-05-15 02:01:57.010062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.010149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.010179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.339 qpair failed and we were unable to recover it. 00:33:33.339 [2024-05-15 02:01:57.010298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.010447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.010472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.339 qpair failed and we were unable to recover it. 00:33:33.339 [2024-05-15 02:01:57.010579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.010697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.010722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.339 qpair failed and we were unable to recover it. 00:33:33.339 [2024-05-15 02:01:57.010807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.010904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.010930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.339 qpair failed and we were unable to recover it. 
00:33:33.339 [2024-05-15 02:01:57.011076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.011197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.011230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.339 qpair failed and we were unable to recover it. 00:33:33.339 [2024-05-15 02:01:57.011390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.011481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.011516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.339 qpair failed and we were unable to recover it. 00:33:33.339 [2024-05-15 02:01:57.011624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.011753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.011779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.339 qpair failed and we were unable to recover it. 00:33:33.339 [2024-05-15 02:01:57.011883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.012022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.012048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.339 qpair failed and we were unable to recover it. 00:33:33.339 [2024-05-15 02:01:57.012197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.012301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.012327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.339 qpair failed and we were unable to recover it. 00:33:33.339 [2024-05-15 02:01:57.012446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.012549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.012575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.339 qpair failed and we were unable to recover it. 00:33:33.339 [2024-05-15 02:01:57.012699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.012818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.012844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.339 qpair failed and we were unable to recover it. 
00:33:33.339 [2024-05-15 02:01:57.012981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.013109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.013135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.339 qpair failed and we were unable to recover it. 00:33:33.339 [2024-05-15 02:01:57.013252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.013352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.013378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.339 qpair failed and we were unable to recover it. 00:33:33.339 [2024-05-15 02:01:57.013503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.013609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.013635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.339 qpair failed and we were unable to recover it. 00:33:33.339 [2024-05-15 02:01:57.013727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.013852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.013878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.339 qpair failed and we were unable to recover it. 00:33:33.339 [2024-05-15 02:01:57.013978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.014103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.014128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.339 qpair failed and we were unable to recover it. 00:33:33.339 [2024-05-15 02:01:57.014226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.014339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.014365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.339 qpair failed and we were unable to recover it. 00:33:33.339 [2024-05-15 02:01:57.014499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.014632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.014659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.339 qpair failed and we were unable to recover it. 
00:33:33.339 [2024-05-15 02:01:57.014777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.014894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.014921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.339 qpair failed and we were unable to recover it. 00:33:33.339 [2024-05-15 02:01:57.015040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.015193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.015226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.339 qpair failed and we were unable to recover it. 00:33:33.339 [2024-05-15 02:01:57.015331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.015433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.015458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.339 qpair failed and we were unable to recover it. 00:33:33.339 [2024-05-15 02:01:57.015592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.015680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.015707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.339 qpair failed and we were unable to recover it. 00:33:33.339 [2024-05-15 02:01:57.015851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.015986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.016012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.339 qpair failed and we were unable to recover it. 00:33:33.339 [2024-05-15 02:01:57.016165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.016315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.016342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.339 qpair failed and we were unable to recover it. 00:33:33.339 [2024-05-15 02:01:57.016469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.016594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.016620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.339 qpair failed and we were unable to recover it. 
00:33:33.339 [2024-05-15 02:01:57.016753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.339 [2024-05-15 02:01:57.016881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.340 [2024-05-15 02:01:57.016907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.340 qpair failed and we were unable to recover it. 00:33:33.340 [2024-05-15 02:01:57.017036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.340 [2024-05-15 02:01:57.017154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.340 [2024-05-15 02:01:57.017180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.340 qpair failed and we were unable to recover it. 00:33:33.340 [2024-05-15 02:01:57.017344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.340 [2024-05-15 02:01:57.017474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.340 [2024-05-15 02:01:57.017500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.340 qpair failed and we were unable to recover it. 00:33:33.340 [2024-05-15 02:01:57.017596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.340 [2024-05-15 02:01:57.017689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.340 [2024-05-15 02:01:57.017716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.340 qpair failed and we were unable to recover it. 00:33:33.340 [2024-05-15 02:01:57.017822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.340 [2024-05-15 02:01:57.017928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.340 [2024-05-15 02:01:57.017954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.340 qpair failed and we were unable to recover it. 00:33:33.340 [2024-05-15 02:01:57.018065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.340 [2024-05-15 02:01:57.018192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.340 [2024-05-15 02:01:57.018226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.340 qpair failed and we were unable to recover it. 00:33:33.340 [2024-05-15 02:01:57.018355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.340 [2024-05-15 02:01:57.018462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.340 [2024-05-15 02:01:57.018487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.340 qpair failed and we were unable to recover it. 
00:33:33.345 [2024-05-15 02:01:57.055874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.345 [2024-05-15 02:01:57.055998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.345 [2024-05-15 02:01:57.056024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.345 qpair failed and we were unable to recover it. 00:33:33.345 [2024-05-15 02:01:57.056147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.345 [2024-05-15 02:01:57.056267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.345 [2024-05-15 02:01:57.056294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.345 qpair failed and we were unable to recover it. 00:33:33.345 [2024-05-15 02:01:57.056377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.345 [2024-05-15 02:01:57.056493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.345 [2024-05-15 02:01:57.056530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.345 qpair failed and we were unable to recover it. 00:33:33.345 [2024-05-15 02:01:57.056643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.345 [2024-05-15 02:01:57.056727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.345 [2024-05-15 02:01:57.056753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.345 qpair failed and we were unable to recover it. 00:33:33.345 [2024-05-15 02:01:57.056898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.345 [2024-05-15 02:01:57.057003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.345 [2024-05-15 02:01:57.057028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.345 qpair failed and we were unable to recover it. 00:33:33.345 [2024-05-15 02:01:57.057153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.345 [2024-05-15 02:01:57.057241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.345 [2024-05-15 02:01:57.057272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.345 qpair failed and we were unable to recover it. 00:33:33.345 [2024-05-15 02:01:57.057387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.345 [2024-05-15 02:01:57.057483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.345 [2024-05-15 02:01:57.057516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.345 qpair failed and we were unable to recover it. 
00:33:33.345 [2024-05-15 02:01:57.057608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.345 [2024-05-15 02:01:57.057728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.345 [2024-05-15 02:01:57.057754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.345 qpair failed and we were unable to recover it. 00:33:33.345 [2024-05-15 02:01:57.057863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.345 [2024-05-15 02:01:57.058014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.345 [2024-05-15 02:01:57.058040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.345 qpair failed and we were unable to recover it. 00:33:33.345 [2024-05-15 02:01:57.058160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.345 [2024-05-15 02:01:57.058284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.345 [2024-05-15 02:01:57.058311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.345 qpair failed and we were unable to recover it. 00:33:33.345 [2024-05-15 02:01:57.058408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.345 [2024-05-15 02:01:57.058507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.345 [2024-05-15 02:01:57.058533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.345 qpair failed and we were unable to recover it. 00:33:33.345 [2024-05-15 02:01:57.058687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.345 [2024-05-15 02:01:57.058787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.345 [2024-05-15 02:01:57.058814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.345 qpair failed and we were unable to recover it. 00:33:33.345 [2024-05-15 02:01:57.058924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.345 [2024-05-15 02:01:57.059017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.345 [2024-05-15 02:01:57.059044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.345 qpair failed and we were unable to recover it. 00:33:33.345 [2024-05-15 02:01:57.059149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.345 [2024-05-15 02:01:57.059304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.345 [2024-05-15 02:01:57.059331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.345 qpair failed and we were unable to recover it. 
00:33:33.345 [2024-05-15 02:01:57.059480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.345 [2024-05-15 02:01:57.059617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.345 [2024-05-15 02:01:57.059643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.345 qpair failed and we were unable to recover it. 00:33:33.345 [2024-05-15 02:01:57.059773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.345 [2024-05-15 02:01:57.059899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.345 [2024-05-15 02:01:57.059924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.345 qpair failed and we were unable to recover it. 00:33:33.345 [2024-05-15 02:01:57.060053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.345 [2024-05-15 02:01:57.060165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.345 [2024-05-15 02:01:57.060191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.345 qpair failed and we were unable to recover it. 00:33:33.345 [2024-05-15 02:01:57.060358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.345 [2024-05-15 02:01:57.060477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.345 [2024-05-15 02:01:57.060502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.345 qpair failed and we were unable to recover it. 00:33:33.345 [2024-05-15 02:01:57.060608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.345 [2024-05-15 02:01:57.060760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.345 [2024-05-15 02:01:57.060786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.345 qpair failed and we were unable to recover it. 00:33:33.345 [2024-05-15 02:01:57.060889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.345 [2024-05-15 02:01:57.061004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.345 [2024-05-15 02:01:57.061030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.345 qpair failed and we were unable to recover it. 00:33:33.345 [2024-05-15 02:01:57.061160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.345 [2024-05-15 02:01:57.061322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.345 [2024-05-15 02:01:57.061348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.345 qpair failed and we were unable to recover it. 
00:33:33.345 [2024-05-15 02:01:57.061477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.345 [2024-05-15 02:01:57.061610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.345 [2024-05-15 02:01:57.061636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.345 qpair failed and we were unable to recover it. 00:33:33.345 [2024-05-15 02:01:57.061732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.061862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.061888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.346 qpair failed and we were unable to recover it. 00:33:33.346 [2024-05-15 02:01:57.062016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.062121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.062147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.346 qpair failed and we were unable to recover it. 00:33:33.346 [2024-05-15 02:01:57.062266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.062382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.062408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.346 qpair failed and we were unable to recover it. 00:33:33.346 [2024-05-15 02:01:57.062507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.062660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.062686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.346 qpair failed and we were unable to recover it. 00:33:33.346 [2024-05-15 02:01:57.062823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.062922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.062948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.346 qpair failed and we were unable to recover it. 00:33:33.346 [2024-05-15 02:01:57.063065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.063151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.063176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.346 qpair failed and we were unable to recover it. 
00:33:33.346 [2024-05-15 02:01:57.063304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.063383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:33.346 [2024-05-15 02:01:57.063463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.063489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.346 qpair failed and we were unable to recover it. 00:33:33.346 [2024-05-15 02:01:57.063628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.063755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.063781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.346 qpair failed and we were unable to recover it. 00:33:33.346 [2024-05-15 02:01:57.063895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.064016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.064042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.346 qpair failed and we were unable to recover it. 00:33:33.346 [2024-05-15 02:01:57.064199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.064309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.064336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.346 qpair failed and we were unable to recover it. 00:33:33.346 [2024-05-15 02:01:57.064481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.064579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.064611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.346 qpair failed and we were unable to recover it. 00:33:33.346 [2024-05-15 02:01:57.064706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.064819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.064845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.346 qpair failed and we were unable to recover it. 00:33:33.346 [2024-05-15 02:01:57.064948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.065064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.065090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.346 qpair failed and we were unable to recover it. 
00:33:33.346 [2024-05-15 02:01:57.065190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.065323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.065350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.346 qpair failed and we were unable to recover it. 00:33:33.346 [2024-05-15 02:01:57.065497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.065590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.065616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.346 qpair failed and we were unable to recover it. 00:33:33.346 [2024-05-15 02:01:57.065767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.065892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.065918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.346 qpair failed and we were unable to recover it. 00:33:33.346 [2024-05-15 02:01:57.066033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.066155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.066180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.346 qpair failed and we were unable to recover it. 00:33:33.346 [2024-05-15 02:01:57.066327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.066442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.066467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.346 qpair failed and we were unable to recover it. 00:33:33.346 [2024-05-15 02:01:57.066586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.066732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.066758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.346 qpair failed and we were unable to recover it. 00:33:33.346 [2024-05-15 02:01:57.066883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.067009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.067035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.346 qpair failed and we were unable to recover it. 
00:33:33.346 [2024-05-15 02:01:57.067166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.067275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.067302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.346 qpair failed and we were unable to recover it. 00:33:33.346 [2024-05-15 02:01:57.067421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.067503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.067539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.346 qpair failed and we were unable to recover it. 00:33:33.346 [2024-05-15 02:01:57.067691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.067816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.067844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.346 qpair failed and we were unable to recover it. 00:33:33.346 [2024-05-15 02:01:57.068000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.068122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.068149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.346 qpair failed and we were unable to recover it. 00:33:33.346 [2024-05-15 02:01:57.068278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.068400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.068426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.346 qpair failed and we were unable to recover it. 00:33:33.346 [2024-05-15 02:01:57.068543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.068685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.068710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.346 qpair failed and we were unable to recover it. 00:33:33.346 [2024-05-15 02:01:57.068798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.068919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.068945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.346 qpair failed and we were unable to recover it. 
00:33:33.346 [2024-05-15 02:01:57.069033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.069147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.069172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.346 qpair failed and we were unable to recover it. 00:33:33.346 [2024-05-15 02:01:57.069298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.069424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.069450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.346 qpair failed and we were unable to recover it. 00:33:33.346 [2024-05-15 02:01:57.069556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.069674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.346 [2024-05-15 02:01:57.069701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.346 qpair failed and we were unable to recover it. 00:33:33.347 [2024-05-15 02:01:57.069834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.069958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.069985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.347 qpair failed and we were unable to recover it. 00:33:33.347 [2024-05-15 02:01:57.070110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.070245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.070277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.347 qpair failed and we were unable to recover it. 00:33:33.347 [2024-05-15 02:01:57.070400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.070495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.070530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.347 qpair failed and we were unable to recover it. 00:33:33.347 [2024-05-15 02:01:57.070649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.070784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.070810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.347 qpair failed and we were unable to recover it. 
00:33:33.347 [2024-05-15 02:01:57.070935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.071062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.071088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.347 qpair failed and we were unable to recover it. 00:33:33.347 [2024-05-15 02:01:57.071206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.071353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.071379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.347 qpair failed and we were unable to recover it. 00:33:33.347 [2024-05-15 02:01:57.071477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.071586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.071616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.347 qpair failed and we were unable to recover it. 00:33:33.347 [2024-05-15 02:01:57.071741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.071863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.071888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.347 qpair failed and we were unable to recover it. 00:33:33.347 [2024-05-15 02:01:57.072018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.072137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.072163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.347 qpair failed and we were unable to recover it. 00:33:33.347 [2024-05-15 02:01:57.072289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.072419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.072445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.347 qpair failed and we were unable to recover it. 00:33:33.347 [2024-05-15 02:01:57.072572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.072685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.072712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.347 qpair failed and we were unable to recover it. 
00:33:33.347 [2024-05-15 02:01:57.072825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.072952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.072978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.347 qpair failed and we were unable to recover it. 00:33:33.347 [2024-05-15 02:01:57.073067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.073156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.073182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.347 qpair failed and we were unable to recover it. 00:33:33.347 [2024-05-15 02:01:57.073333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.073480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.073513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.347 qpair failed and we were unable to recover it. 00:33:33.347 [2024-05-15 02:01:57.073598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.073741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.073767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.347 qpair failed and we were unable to recover it. 00:33:33.347 [2024-05-15 02:01:57.073893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.074018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.074044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.347 qpair failed and we were unable to recover it. 00:33:33.347 [2024-05-15 02:01:57.074135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.074251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.074286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.347 qpair failed and we were unable to recover it. 00:33:33.347 [2024-05-15 02:01:57.074412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.074537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.074563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.347 qpair failed and we were unable to recover it. 
00:33:33.347 [2024-05-15 02:01:57.074681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.074839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.074865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.347 qpair failed and we were unable to recover it. 00:33:33.347 [2024-05-15 02:01:57.075013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.075161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.075187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.347 qpair failed and we were unable to recover it. 00:33:33.347 [2024-05-15 02:01:57.075332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.075425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.075451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.347 qpair failed and we were unable to recover it. 00:33:33.347 [2024-05-15 02:01:57.075561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.075652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.075679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.347 qpair failed and we were unable to recover it. 00:33:33.347 [2024-05-15 02:01:57.075763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.075907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.075933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.347 qpair failed and we were unable to recover it. 00:33:33.347 [2024-05-15 02:01:57.076059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.076211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.076274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.347 qpair failed and we were unable to recover it. 00:33:33.347 [2024-05-15 02:01:57.076426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.076540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.076566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.347 qpair failed and we were unable to recover it. 
00:33:33.347 [2024-05-15 02:01:57.076683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.076805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.076831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.347 qpair failed and we were unable to recover it. 00:33:33.347 [2024-05-15 02:01:57.076961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.077057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.077083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.347 qpair failed and we were unable to recover it. 00:33:33.347 [2024-05-15 02:01:57.077233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.077370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.077396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.347 qpair failed and we were unable to recover it. 00:33:33.347 [2024-05-15 02:01:57.077525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.077643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.077670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.347 qpair failed and we were unable to recover it. 00:33:33.347 [2024-05-15 02:01:57.077782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.077903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.077929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.347 qpair failed and we were unable to recover it. 00:33:33.347 [2024-05-15 02:01:57.078051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.078173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.078199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.347 qpair failed and we were unable to recover it. 00:33:33.347 [2024-05-15 02:01:57.078324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.078449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.347 [2024-05-15 02:01:57.078475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.347 qpair failed and we were unable to recover it. 
00:33:33.347 [2024-05-15 02:01:57.078626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.078724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.078750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.348 qpair failed and we were unable to recover it. 00:33:33.348 [2024-05-15 02:01:57.078878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.078996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.079022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.348 qpair failed and we were unable to recover it. 00:33:33.348 [2024-05-15 02:01:57.079150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.079260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.079287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.348 qpair failed and we were unable to recover it. 00:33:33.348 [2024-05-15 02:01:57.079388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.079511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.079537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.348 qpair failed and we were unable to recover it. 00:33:33.348 [2024-05-15 02:01:57.079654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.079776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.079802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.348 qpair failed and we were unable to recover it. 00:33:33.348 [2024-05-15 02:01:57.079951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.080041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.080067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.348 qpair failed and we were unable to recover it. 00:33:33.348 [2024-05-15 02:01:57.080167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.080264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.080291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.348 qpair failed and we were unable to recover it. 
00:33:33.348 [2024-05-15 02:01:57.080400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.080516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.080543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.348 qpair failed and we were unable to recover it. 00:33:33.348 [2024-05-15 02:01:57.080661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.080782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.080808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.348 qpair failed and we were unable to recover it. 00:33:33.348 [2024-05-15 02:01:57.080940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.081036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.081062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.348 qpair failed and we were unable to recover it. 00:33:33.348 [2024-05-15 02:01:57.081211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.081325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.081352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.348 qpair failed and we were unable to recover it. 00:33:33.348 [2024-05-15 02:01:57.081442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.081593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.081620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.348 qpair failed and we were unable to recover it. 00:33:33.348 [2024-05-15 02:01:57.081712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.081828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.081854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.348 qpair failed and we were unable to recover it. 00:33:33.348 [2024-05-15 02:01:57.081958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.082048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.082075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.348 qpair failed and we were unable to recover it. 
00:33:33.348 [2024-05-15 02:01:57.082176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.082328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.082355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.348 qpair failed and we were unable to recover it. 00:33:33.348 [2024-05-15 02:01:57.082441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.082564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.082591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.348 qpair failed and we were unable to recover it. 00:33:33.348 [2024-05-15 02:01:57.082742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.082860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.082887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.348 qpair failed and we were unable to recover it. 00:33:33.348 [2024-05-15 02:01:57.082984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.083084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.083110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.348 qpair failed and we were unable to recover it. 00:33:33.348 [2024-05-15 02:01:57.083238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.083356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.083382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.348 qpair failed and we were unable to recover it. 00:33:33.348 [2024-05-15 02:01:57.083509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.083641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.083667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.348 qpair failed and we were unable to recover it. 00:33:33.348 [2024-05-15 02:01:57.083797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.083942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.083968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.348 qpair failed and we were unable to recover it. 
00:33:33.348 [2024-05-15 02:01:57.084068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.084168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.084194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.348 qpair failed and we were unable to recover it. 00:33:33.348 [2024-05-15 02:01:57.084366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.084519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.084547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.348 qpair failed and we were unable to recover it. 00:33:33.348 [2024-05-15 02:01:57.084680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.084805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.084831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.348 qpair failed and we were unable to recover it. 00:33:33.348 [2024-05-15 02:01:57.084935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.085091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.085118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.348 qpair failed and we were unable to recover it. 00:33:33.348 [2024-05-15 02:01:57.085268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.085418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.085451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.348 qpair failed and we were unable to recover it. 00:33:33.348 [2024-05-15 02:01:57.085551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.085679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.085706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.348 qpair failed and we were unable to recover it. 00:33:33.348 [2024-05-15 02:01:57.085839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.085939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.348 [2024-05-15 02:01:57.085966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.348 qpair failed and we were unable to recover it. 
[... the same four-line failure sequence — two posix.c:1037:posix_sock_create connect() failures with errno = 111, one nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock sock connection error, and "qpair failed and we were unable to recover it." — repeats roughly 150 more times between 02:01:57.084680 and 02:01:57.124838 (elapsed 00:33:33.348 through 00:33:33.354), first for tqpair=0x7f2114000b90 and then for tqpair=0x1d7c570, always with addr=10.0.0.2, port=4420 ...]
00:33:33.354 [2024-05-15 02:01:57.124958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.354 [2024-05-15 02:01:57.125076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.125102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.355 qpair failed and we were unable to recover it. 00:33:33.355 [2024-05-15 02:01:57.125228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.125355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.125381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.355 qpair failed and we were unable to recover it. 00:33:33.355 [2024-05-15 02:01:57.125546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.125642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.125669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.355 qpair failed and we were unable to recover it. 00:33:33.355 [2024-05-15 02:01:57.125775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.125876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.125903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.355 qpair failed and we were unable to recover it. 00:33:33.355 [2024-05-15 02:01:57.126003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.126099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.126125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.355 qpair failed and we were unable to recover it. 00:33:33.355 [2024-05-15 02:01:57.126246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.126342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.126368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.355 qpair failed and we were unable to recover it. 00:33:33.355 [2024-05-15 02:01:57.126495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.126596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.126623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.355 qpair failed and we were unable to recover it. 
00:33:33.355 [2024-05-15 02:01:57.126722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.126813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.126839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.355 qpair failed and we were unable to recover it. 00:33:33.355 [2024-05-15 02:01:57.126965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.127107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.127133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.355 qpair failed and we were unable to recover it. 00:33:33.355 [2024-05-15 02:01:57.127266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.127401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.127427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.355 qpair failed and we were unable to recover it. 00:33:33.355 [2024-05-15 02:01:57.127543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.127658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.127684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.355 qpair failed and we were unable to recover it. 00:33:33.355 [2024-05-15 02:01:57.127796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.127908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.127934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.355 qpair failed and we were unable to recover it. 00:33:33.355 [2024-05-15 02:01:57.128081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.128210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.128242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.355 qpair failed and we were unable to recover it. 00:33:33.355 [2024-05-15 02:01:57.128369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.128493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.128524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.355 qpair failed and we were unable to recover it. 
00:33:33.355 [2024-05-15 02:01:57.128630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.128754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.128780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.355 qpair failed and we were unable to recover it. 00:33:33.355 [2024-05-15 02:01:57.128903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.128998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.129025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.355 qpair failed and we were unable to recover it. 00:33:33.355 [2024-05-15 02:01:57.129171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.129302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.129329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.355 qpair failed and we were unable to recover it. 00:33:33.355 [2024-05-15 02:01:57.129439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.129562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.129588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.355 qpair failed and we were unable to recover it. 00:33:33.355 [2024-05-15 02:01:57.129708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.129844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.129871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.355 qpair failed and we were unable to recover it. 00:33:33.355 [2024-05-15 02:01:57.129963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.130090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.130116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.355 qpair failed and we were unable to recover it. 00:33:33.355 [2024-05-15 02:01:57.130212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.130321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.130347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.355 qpair failed and we were unable to recover it. 
00:33:33.355 [2024-05-15 02:01:57.130473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.130572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.130600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.355 qpair failed and we were unable to recover it. 00:33:33.355 [2024-05-15 02:01:57.130691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.130804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.130831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.355 qpair failed and we were unable to recover it. 00:33:33.355 [2024-05-15 02:01:57.130939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.131026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.131067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.355 qpair failed and we were unable to recover it. 00:33:33.355 [2024-05-15 02:01:57.131164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.131267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.131294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.355 qpair failed and we were unable to recover it. 00:33:33.355 [2024-05-15 02:01:57.131394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.131506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.131542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.355 qpair failed and we were unable to recover it. 00:33:33.355 [2024-05-15 02:01:57.131701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.131850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.131877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.355 qpair failed and we were unable to recover it. 00:33:33.355 [2024-05-15 02:01:57.132000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.132095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.355 [2024-05-15 02:01:57.132121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.355 qpair failed and we were unable to recover it. 
00:33:33.355 [2024-05-15 02:01:57.132262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.132385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.132411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.356 qpair failed and we were unable to recover it. 00:33:33.356 [2024-05-15 02:01:57.132518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.132643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.132670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.356 qpair failed and we were unable to recover it. 00:33:33.356 [2024-05-15 02:01:57.132774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.132871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.132906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.356 qpair failed and we were unable to recover it. 00:33:33.356 [2024-05-15 02:01:57.133019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.133137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.133163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.356 qpair failed and we were unable to recover it. 00:33:33.356 [2024-05-15 02:01:57.133292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.133417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.133443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.356 qpair failed and we were unable to recover it. 00:33:33.356 [2024-05-15 02:01:57.133583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.133674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.133701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.356 qpair failed and we were unable to recover it. 00:33:33.356 [2024-05-15 02:01:57.133801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.133918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.133944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.356 qpair failed and we were unable to recover it. 
00:33:33.356 [2024-05-15 02:01:57.134069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.134194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.134226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.356 qpair failed and we were unable to recover it. 00:33:33.356 [2024-05-15 02:01:57.134324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.134449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.134476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.356 qpair failed and we were unable to recover it. 00:33:33.356 [2024-05-15 02:01:57.134585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.134697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.134723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.356 qpair failed and we were unable to recover it. 00:33:33.356 [2024-05-15 02:01:57.134847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.134947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.134973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.356 qpair failed and we were unable to recover it. 00:33:33.356 [2024-05-15 02:01:57.135092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.135188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.135223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.356 qpair failed and we were unable to recover it. 00:33:33.356 [2024-05-15 02:01:57.135360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.135485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.135511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.356 qpair failed and we were unable to recover it. 00:33:33.356 [2024-05-15 02:01:57.135640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.135763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.135790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.356 qpair failed and we were unable to recover it. 
00:33:33.356 [2024-05-15 02:01:57.135915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.136009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.136036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.356 qpair failed and we were unable to recover it. 00:33:33.356 [2024-05-15 02:01:57.136160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.136268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.136296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.356 qpair failed and we were unable to recover it. 00:33:33.356 [2024-05-15 02:01:57.136411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.136511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.136539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.356 qpair failed and we were unable to recover it. 00:33:33.356 [2024-05-15 02:01:57.136662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.136780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.136807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.356 qpair failed and we were unable to recover it. 00:33:33.356 [2024-05-15 02:01:57.136937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.137054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.137081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.356 qpair failed and we were unable to recover it. 00:33:33.356 [2024-05-15 02:01:57.137184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.137310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.137337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.356 qpair failed and we were unable to recover it. 00:33:33.356 [2024-05-15 02:01:57.137437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.137552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.137579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.356 qpair failed and we were unable to recover it. 
00:33:33.356 [2024-05-15 02:01:57.137680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.137801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.137828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.356 qpair failed and we were unable to recover it. 00:33:33.356 [2024-05-15 02:01:57.137925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.138011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.138038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.356 qpair failed and we were unable to recover it. 00:33:33.356 [2024-05-15 02:01:57.138142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.138288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.138316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.356 qpair failed and we were unable to recover it. 00:33:33.356 [2024-05-15 02:01:57.138441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.138562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.138589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.356 qpair failed and we were unable to recover it. 00:33:33.356 [2024-05-15 02:01:57.138710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.138824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.138851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.356 qpair failed and we were unable to recover it. 00:33:33.356 [2024-05-15 02:01:57.138970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.139095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.356 [2024-05-15 02:01:57.139122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.356 qpair failed and we were unable to recover it. 00:33:33.356 [2024-05-15 02:01:57.139250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.139375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.139402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.357 qpair failed and we were unable to recover it. 
00:33:33.357 [2024-05-15 02:01:57.139487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.139598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.139625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.357 qpair failed and we were unable to recover it. 00:33:33.357 [2024-05-15 02:01:57.139751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.139842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.139870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.357 qpair failed and we were unable to recover it. 00:33:33.357 [2024-05-15 02:01:57.139971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.140092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.140118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.357 qpair failed and we were unable to recover it. 00:33:33.357 [2024-05-15 02:01:57.140261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.140389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.140416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.357 qpair failed and we were unable to recover it. 00:33:33.357 [2024-05-15 02:01:57.140512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.140613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.140639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.357 qpair failed and we were unable to recover it. 00:33:33.357 [2024-05-15 02:01:57.140764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.140867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.140893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.357 qpair failed and we were unable to recover it. 00:33:33.357 [2024-05-15 02:01:57.140986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.141111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.141137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.357 qpair failed and we were unable to recover it. 
00:33:33.357 [2024-05-15 02:01:57.141237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.141366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.141392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.357 qpair failed and we were unable to recover it. 00:33:33.357 [2024-05-15 02:01:57.141493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.141619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.141645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.357 qpair failed and we were unable to recover it. 00:33:33.357 [2024-05-15 02:01:57.141737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.141866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.141892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.357 qpair failed and we were unable to recover it. 00:33:33.357 [2024-05-15 02:01:57.142019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.142133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.142159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.357 qpair failed and we were unable to recover it. 00:33:33.357 [2024-05-15 02:01:57.142325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.142422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.142448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.357 qpair failed and we were unable to recover it. 00:33:33.357 [2024-05-15 02:01:57.142573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.142682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.142708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.357 qpair failed and we were unable to recover it. 00:33:33.357 [2024-05-15 02:01:57.142794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.142928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.142954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.357 qpair failed and we were unable to recover it. 
00:33:33.357 [2024-05-15 02:01:57.143083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.143214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.143258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.357 qpair failed and we were unable to recover it. 00:33:33.357 [2024-05-15 02:01:57.143359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.143503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.143535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.357 qpair failed and we were unable to recover it. 00:33:33.357 [2024-05-15 02:01:57.143643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.143766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.143792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.357 qpair failed and we were unable to recover it. 00:33:33.357 [2024-05-15 02:01:57.143888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.143980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.144007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.357 qpair failed and we were unable to recover it. 00:33:33.357 [2024-05-15 02:01:57.144125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.144225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.144267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.357 qpair failed and we were unable to recover it. 00:33:33.357 [2024-05-15 02:01:57.144363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.144510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.144537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.357 qpair failed and we were unable to recover it. 00:33:33.357 [2024-05-15 02:01:57.144670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.144767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.144794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.357 qpair failed and we were unable to recover it. 
00:33:33.357 [2024-05-15 02:01:57.144913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.145044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.145071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.357 qpair failed and we were unable to recover it. 00:33:33.357 [2024-05-15 02:01:57.145221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.145375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.145401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.357 qpair failed and we were unable to recover it. 00:33:33.357 [2024-05-15 02:01:57.145533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.145664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.145690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.357 qpair failed and we were unable to recover it. 00:33:33.357 [2024-05-15 02:01:57.145836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.145935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.145961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.357 qpair failed and we were unable to recover it. 00:33:33.357 [2024-05-15 02:01:57.146105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.146227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.146266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.357 qpair failed and we were unable to recover it. 00:33:33.357 [2024-05-15 02:01:57.146425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.146534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.146561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.357 qpair failed and we were unable to recover it. 00:33:33.357 [2024-05-15 02:01:57.146663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.146787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.146814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.357 qpair failed and we were unable to recover it. 
00:33:33.357 [2024-05-15 02:01:57.146965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.147090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.147116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.357 qpair failed and we were unable to recover it. 00:33:33.357 [2024-05-15 02:01:57.147259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.147353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.147379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.357 qpair failed and we were unable to recover it. 00:33:33.357 [2024-05-15 02:01:57.147475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.357 [2024-05-15 02:01:57.147605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.358 [2024-05-15 02:01:57.147632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.358 qpair failed and we were unable to recover it. 00:33:33.358 [2024-05-15 02:01:57.147791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.358 [2024-05-15 02:01:57.147913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.358 [2024-05-15 02:01:57.147940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.358 qpair failed and we were unable to recover it. 00:33:33.358 [2024-05-15 02:01:57.148043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.358 [2024-05-15 02:01:57.148166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.358 [2024-05-15 02:01:57.148192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.358 qpair failed and we were unable to recover it. 00:33:33.358 [2024-05-15 02:01:57.148365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.358 [2024-05-15 02:01:57.148510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.358 [2024-05-15 02:01:57.148542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:33.358 qpair failed and we were unable to recover it. 00:33:33.358 [2024-05-15 02:01:57.148687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.358 [2024-05-15 02:01:57.148792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.358 [2024-05-15 02:01:57.148819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:33.358 qpair failed and we were unable to recover it. 
00:33:33.358 [2024-05-15 02:01:57.148942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.358 [2024-05-15 02:01:57.149068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.358 [2024-05-15 02:01:57.149095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:33.358 qpair failed and we were unable to recover it. 00:33:33.358 [2024-05-15 02:01:57.149213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.358 [2024-05-15 02:01:57.149330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.358 [2024-05-15 02:01:57.149357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:33.358 qpair failed and we were unable to recover it. 00:33:33.358 [2024-05-15 02:01:57.149491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.358 [2024-05-15 02:01:57.149591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.358 [2024-05-15 02:01:57.149619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:33.358 qpair failed and we were unable to recover it. 00:33:33.358 [2024-05-15 02:01:57.149748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.358 [2024-05-15 02:01:57.149911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.358 [2024-05-15 02:01:57.149938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:33.358 qpair failed and we were unable to recover it. 00:33:33.358 [2024-05-15 02:01:57.150045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.358 [2024-05-15 02:01:57.150176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.358 [2024-05-15 02:01:57.150202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:33.358 qpair failed and we were unable to recover it. 00:33:33.358 [2024-05-15 02:01:57.150320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.358 [2024-05-15 02:01:57.150469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.358 [2024-05-15 02:01:57.150495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:33.358 qpair failed and we were unable to recover it. 00:33:33.358 [2024-05-15 02:01:57.150589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.358 [2024-05-15 02:01:57.150736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.358 [2024-05-15 02:01:57.150762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420 00:33:33.358 qpair failed and we were unable to recover it. 
00:33:33.358 [2024-05-15 02:01:57.150883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.358 [2024-05-15 02:01:57.151013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.358 [2024-05-15 02:01:57.151041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f211c000b90 with addr=10.0.0.2, port=4420
00:33:33.358 qpair failed and we were unable to recover it.
00:33:33.359 [2024-05-15 02:01:57.165631] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:33.359 [2024-05-15 02:01:57.165666] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:33.359 [2024-05-15 02:01:57.165682] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:33.359 [2024-05-15 02:01:57.165701] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:33.359 [2024-05-15 02:01:57.165712] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:33.359 [2024-05-15 02:01:57.165759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:33:33.359 [2024-05-15 02:01:57.165788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:33:33.359 [2024-05-15 02:01:57.165815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:33:33.359 [2024-05-15 02:01:57.165817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:33:33.360 [2024-05-15 02:01:57.175104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.360 [2024-05-15 02:01:57.175212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.360 [2024-05-15 02:01:57.175254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420
00:33:33.360 qpair failed and we were unable to recover it.
00:33:33.360 [2024-05-15 02:01:57.175374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.360 [2024-05-15 02:01:57.175481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.360 [2024-05-15 02:01:57.175511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420
00:33:33.360 qpair failed and we were unable to recover it.
00:33:33.361 [2024-05-15 02:01:57.177081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.177171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.177197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.361 qpair failed and we were unable to recover it. 00:33:33.361 [2024-05-15 02:01:57.177332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.177426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.177453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.361 qpair failed and we were unable to recover it. 00:33:33.361 [2024-05-15 02:01:57.177573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.177681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.177708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.361 qpair failed and we were unable to recover it. 00:33:33.361 [2024-05-15 02:01:57.177817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.177938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.177964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.361 qpair failed and we were unable to recover it. 00:33:33.361 [2024-05-15 02:01:57.178062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.178162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.178188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.361 qpair failed and we were unable to recover it. 00:33:33.361 [2024-05-15 02:01:57.178316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.178405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.178430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.361 qpair failed and we were unable to recover it. 00:33:33.361 [2024-05-15 02:01:57.178527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.178653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.178678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.361 qpair failed and we were unable to recover it. 
00:33:33.361 [2024-05-15 02:01:57.178767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.178889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.178916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.361 qpair failed and we were unable to recover it. 00:33:33.361 [2024-05-15 02:01:57.179040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.179167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.179194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.361 qpair failed and we were unable to recover it. 00:33:33.361 [2024-05-15 02:01:57.179312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.179405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.179432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.361 qpair failed and we were unable to recover it. 00:33:33.361 [2024-05-15 02:01:57.179527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.179620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.179646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.361 qpair failed and we were unable to recover it. 00:33:33.361 [2024-05-15 02:01:57.179751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.179857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.179884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.361 qpair failed and we were unable to recover it. 00:33:33.361 [2024-05-15 02:01:57.179977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.180086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.180113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.361 qpair failed and we were unable to recover it. 00:33:33.361 [2024-05-15 02:01:57.180222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.180327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.180353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.361 qpair failed and we were unable to recover it. 
00:33:33.361 [2024-05-15 02:01:57.180447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.180533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.180558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.361 qpair failed and we were unable to recover it. 00:33:33.361 [2024-05-15 02:01:57.180677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.180784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.180810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.361 qpair failed and we were unable to recover it. 00:33:33.361 [2024-05-15 02:01:57.180959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.181056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.181083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.361 qpair failed and we were unable to recover it. 00:33:33.361 [2024-05-15 02:01:57.181182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.181288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.181315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.361 qpair failed and we were unable to recover it. 00:33:33.361 [2024-05-15 02:01:57.181430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.181526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.181554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.361 qpair failed and we were unable to recover it. 00:33:33.361 [2024-05-15 02:01:57.181651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.181748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.181775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.361 qpair failed and we were unable to recover it. 00:33:33.361 [2024-05-15 02:01:57.181907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.182004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.182031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.361 qpair failed and we were unable to recover it. 
00:33:33.361 [2024-05-15 02:01:57.182138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.182238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.182270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.361 qpair failed and we were unable to recover it. 00:33:33.361 [2024-05-15 02:01:57.182403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.182503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.182530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.361 qpair failed and we were unable to recover it. 00:33:33.361 [2024-05-15 02:01:57.182637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.182763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.182790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.361 qpair failed and we were unable to recover it. 00:33:33.361 [2024-05-15 02:01:57.182916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.183012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.183038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.361 qpair failed and we were unable to recover it. 00:33:33.361 [2024-05-15 02:01:57.183140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.183255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.183283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.361 qpair failed and we were unable to recover it. 00:33:33.361 [2024-05-15 02:01:57.183385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.183493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.183519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.361 qpair failed and we were unable to recover it. 00:33:33.361 [2024-05-15 02:01:57.183643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.183740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.361 [2024-05-15 02:01:57.183767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.361 qpair failed and we were unable to recover it. 
00:33:33.365 [2024-05-15 02:01:57.213814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.365 [2024-05-15 02:01:57.213935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.365 [2024-05-15 02:01:57.213965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420
00:33:33.365 qpair failed and we were unable to recover it.
[... the same failure sequence repeats for tqpair=0x1d7c570 from 02:01:57.214076 through 02:01:57.218667 ...]
00:33:33.365 [2024-05-15 02:01:57.218799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.365 [2024-05-15 02:01:57.218887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:33.366 [2024-05-15 02:01:57.218913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420
00:33:33.366 qpair failed and we were unable to recover it.
00:33:33.366 [2024-05-15 02:01:57.219006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.219097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.219124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.366 qpair failed and we were unable to recover it. 00:33:33.366 [2024-05-15 02:01:57.219214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.219332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.219358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.366 qpair failed and we were unable to recover it. 00:33:33.366 [2024-05-15 02:01:57.219481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.219594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.219622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.366 qpair failed and we were unable to recover it. 00:33:33.366 [2024-05-15 02:01:57.219718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.219817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.219843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.366 qpair failed and we were unable to recover it. 00:33:33.366 [2024-05-15 02:01:57.219940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.220030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.220057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.366 qpair failed and we were unable to recover it. 00:33:33.366 [2024-05-15 02:01:57.220151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.220254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.220281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.366 qpair failed and we were unable to recover it. 00:33:33.366 [2024-05-15 02:01:57.220403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.220530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.220556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.366 qpair failed and we were unable to recover it. 
00:33:33.366 [2024-05-15 02:01:57.220653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.220775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.220801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.366 qpair failed and we were unable to recover it. 00:33:33.366 [2024-05-15 02:01:57.220919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.221016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.221042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.366 qpair failed and we were unable to recover it. 00:33:33.366 [2024-05-15 02:01:57.221169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.221270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.221297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.366 qpair failed and we were unable to recover it. 00:33:33.366 [2024-05-15 02:01:57.221404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.221540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.221566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.366 qpair failed and we were unable to recover it. 00:33:33.366 [2024-05-15 02:01:57.221663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.221760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.221786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.366 qpair failed and we were unable to recover it. 00:33:33.366 [2024-05-15 02:01:57.221879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.221994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.222019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.366 qpair failed and we were unable to recover it. 00:33:33.366 [2024-05-15 02:01:57.222121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.222231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.222266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.366 qpair failed and we were unable to recover it. 
00:33:33.366 [2024-05-15 02:01:57.222397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.222493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.222527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.366 qpair failed and we were unable to recover it. 00:33:33.366 [2024-05-15 02:01:57.222624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.222750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.222776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.366 qpair failed and we were unable to recover it. 00:33:33.366 [2024-05-15 02:01:57.222871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.222959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.222985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.366 qpair failed and we were unable to recover it. 00:33:33.366 [2024-05-15 02:01:57.223078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.223200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.223232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.366 qpair failed and we were unable to recover it. 00:33:33.366 [2024-05-15 02:01:57.223330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.223421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.223446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.366 qpair failed and we were unable to recover it. 00:33:33.366 [2024-05-15 02:01:57.223581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.223678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.223704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.366 qpair failed and we were unable to recover it. 00:33:33.366 [2024-05-15 02:01:57.223804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.223901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.223927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.366 qpair failed and we were unable to recover it. 
00:33:33.366 [2024-05-15 02:01:57.224017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.224116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.224143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.366 qpair failed and we were unable to recover it. 00:33:33.366 [2024-05-15 02:01:57.224236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.224331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.224357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.366 qpair failed and we were unable to recover it. 00:33:33.366 [2024-05-15 02:01:57.224455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.224551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.224578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.366 qpair failed and we were unable to recover it. 00:33:33.366 [2024-05-15 02:01:57.224671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.224797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.224823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.366 qpair failed and we were unable to recover it. 00:33:33.366 [2024-05-15 02:01:57.224921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.225021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.225046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.366 qpair failed and we were unable to recover it. 00:33:33.366 [2024-05-15 02:01:57.225146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.225246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.225283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.366 qpair failed and we were unable to recover it. 00:33:33.366 [2024-05-15 02:01:57.225379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.225482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.225518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.366 qpair failed and we were unable to recover it. 
00:33:33.366 [2024-05-15 02:01:57.225621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.225738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.225763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.366 qpair failed and we were unable to recover it. 00:33:33.366 [2024-05-15 02:01:57.225890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.225984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.226010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.366 qpair failed and we were unable to recover it. 00:33:33.366 [2024-05-15 02:01:57.226128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.226235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.226272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.366 qpair failed and we were unable to recover it. 00:33:33.366 [2024-05-15 02:01:57.226369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.226460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.226485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.366 qpair failed and we were unable to recover it. 00:33:33.366 [2024-05-15 02:01:57.226620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.226715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.226741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.366 qpair failed and we were unable to recover it. 00:33:33.366 [2024-05-15 02:01:57.226828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.226924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.226949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.366 qpair failed and we were unable to recover it. 00:33:33.366 [2024-05-15 02:01:57.227076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.227168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.227195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.366 qpair failed and we were unable to recover it. 
00:33:33.366 [2024-05-15 02:01:57.227319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.227419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.227445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.366 qpair failed and we were unable to recover it. 00:33:33.366 [2024-05-15 02:01:57.227563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.227681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.227707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.366 qpair failed and we were unable to recover it. 00:33:33.366 [2024-05-15 02:01:57.227799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.227895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.227920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.366 qpair failed and we were unable to recover it. 00:33:33.366 [2024-05-15 02:01:57.228038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.228131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.228156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.366 qpair failed and we were unable to recover it. 00:33:33.366 [2024-05-15 02:01:57.228286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.228386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.228411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.366 qpair failed and we were unable to recover it. 00:33:33.366 [2024-05-15 02:01:57.228506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.228607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.228633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.366 qpair failed and we were unable to recover it. 00:33:33.366 [2024-05-15 02:01:57.228737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.228837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.366 [2024-05-15 02:01:57.228863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.366 qpair failed and we were unable to recover it. 
00:33:33.367 [2024-05-15 02:01:57.228960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.367 [2024-05-15 02:01:57.229047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.367 [2024-05-15 02:01:57.229074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.367 qpair failed and we were unable to recover it. 00:33:33.367 [2024-05-15 02:01:57.229170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.367 [2024-05-15 02:01:57.229292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.367 [2024-05-15 02:01:57.229318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.367 qpair failed and we were unable to recover it. 00:33:33.367 [2024-05-15 02:01:57.229421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.367 [2024-05-15 02:01:57.229532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.367 [2024-05-15 02:01:57.229559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.367 qpair failed and we were unable to recover it. 00:33:33.367 [2024-05-15 02:01:57.229652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.367 [2024-05-15 02:01:57.229745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.367 [2024-05-15 02:01:57.229770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.367 qpair failed and we were unable to recover it. 00:33:33.367 [2024-05-15 02:01:57.229867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.367 [2024-05-15 02:01:57.229958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.367 [2024-05-15 02:01:57.229984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.367 qpair failed and we were unable to recover it. 00:33:33.367 [2024-05-15 02:01:57.230089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.367 [2024-05-15 02:01:57.230181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.367 [2024-05-15 02:01:57.230207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.367 qpair failed and we were unable to recover it. 00:33:33.367 [2024-05-15 02:01:57.230365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.367 [2024-05-15 02:01:57.230489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.367 [2024-05-15 02:01:57.230515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.367 qpair failed and we were unable to recover it. 
00:33:33.367 [2024-05-15 02:01:57.230618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.367 [2024-05-15 02:01:57.230709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.367 [2024-05-15 02:01:57.230739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.367 qpair failed and we were unable to recover it. 00:33:33.367 [2024-05-15 02:01:57.230852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.367 [2024-05-15 02:01:57.230954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.367 [2024-05-15 02:01:57.230979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.367 qpair failed and we were unable to recover it. 00:33:33.367 [2024-05-15 02:01:57.231070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.367 [2024-05-15 02:01:57.231166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.367 [2024-05-15 02:01:57.231192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.367 qpair failed and we were unable to recover it. 00:33:33.367 [2024-05-15 02:01:57.231311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.367 [2024-05-15 02:01:57.231426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.367 [2024-05-15 02:01:57.231451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.367 qpair failed and we were unable to recover it. 00:33:33.367 [2024-05-15 02:01:57.231563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.367 [2024-05-15 02:01:57.231655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.367 [2024-05-15 02:01:57.231680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.367 qpair failed and we were unable to recover it. 00:33:33.367 [2024-05-15 02:01:57.231801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.367 [2024-05-15 02:01:57.231927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.367 [2024-05-15 02:01:57.231953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.367 qpair failed and we were unable to recover it. 00:33:33.367 [2024-05-15 02:01:57.232078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.367 [2024-05-15 02:01:57.232196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.367 [2024-05-15 02:01:57.232229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7c570 with addr=10.0.0.2, port=4420 00:33:33.367 qpair failed and we were unable to recover it. 
00:33:33.367 [2024-05-15 02:01:57.232356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.367 [2024-05-15 02:01:57.232478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.367 [2024-05-15 02:01:57.232517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.367 qpair failed and we were unable to recover it. 00:33:33.367 [2024-05-15 02:01:57.232625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.367 [2024-05-15 02:01:57.232775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.367 [2024-05-15 02:01:57.232814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.367 qpair failed and we were unable to recover it. 00:33:33.367 [2024-05-15 02:01:57.232940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.367 [2024-05-15 02:01:57.233044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.367 [2024-05-15 02:01:57.233071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.367 qpair failed and we were unable to recover it. 00:33:33.367 [2024-05-15 02:01:57.233228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.367 [2024-05-15 02:01:57.233343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.367 [2024-05-15 02:01:57.233376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.367 qpair failed and we were unable to recover it. 00:33:33.367 [2024-05-15 02:01:57.233473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.367 [2024-05-15 02:01:57.233609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.367 [2024-05-15 02:01:57.233638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.367 qpair failed and we were unable to recover it. 00:33:33.367 [2024-05-15 02:01:57.233733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.367 [2024-05-15 02:01:57.233855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.367 [2024-05-15 02:01:57.233882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.367 qpair failed and we were unable to recover it. 00:33:33.367 [2024-05-15 02:01:57.234006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.367 [2024-05-15 02:01:57.234112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.367 [2024-05-15 02:01:57.234140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.367 qpair failed and we were unable to recover it. 
00:33:33.367 [2024-05-15 02:01:57.234267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.367 [2024-05-15 02:01:57.234370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.367 [2024-05-15 02:01:57.234409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.367 qpair failed and we were unable to recover it. 00:33:33.367 [2024-05-15 02:01:57.234532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.367 [2024-05-15 02:01:57.234641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.367 [2024-05-15 02:01:57.234670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.367 qpair failed and we were unable to recover it. 00:33:33.628 [2024-05-15 02:01:57.234804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.234924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.234951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.628 qpair failed and we were unable to recover it. 00:33:33.628 [2024-05-15 02:01:57.235048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.235142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.235168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.628 qpair failed and we were unable to recover it. 00:33:33.628 [2024-05-15 02:01:57.235277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.235375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.235403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.628 qpair failed and we were unable to recover it. 00:33:33.628 [2024-05-15 02:01:57.235544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.235633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.235659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.628 qpair failed and we were unable to recover it. 00:33:33.628 [2024-05-15 02:01:57.235793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.235915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.235941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.628 qpair failed and we were unable to recover it. 
00:33:33.628 [2024-05-15 02:01:57.236076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.236184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.236210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.628 qpair failed and we were unable to recover it. 00:33:33.628 [2024-05-15 02:01:57.236331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.236457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.236482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.628 qpair failed and we were unable to recover it. 00:33:33.628 [2024-05-15 02:01:57.236599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.236701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.236727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.628 qpair failed and we were unable to recover it. 00:33:33.628 [2024-05-15 02:01:57.236817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.236912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.236938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.628 qpair failed and we were unable to recover it. 00:33:33.628 [2024-05-15 02:01:57.237041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.237159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.237185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.628 qpair failed and we were unable to recover it. 00:33:33.628 [2024-05-15 02:01:57.237296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.237402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.237427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.628 qpair failed and we were unable to recover it. 00:33:33.628 [2024-05-15 02:01:57.237519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.237623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.237650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.628 qpair failed and we were unable to recover it. 
00:33:33.628 [2024-05-15 02:01:57.237743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.237842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.237868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.628 qpair failed and we were unable to recover it. 00:33:33.628 [2024-05-15 02:01:57.237974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.238066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.238093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.628 qpair failed and we were unable to recover it. 00:33:33.628 [2024-05-15 02:01:57.238224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.238335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.238361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.628 qpair failed and we were unable to recover it. 00:33:33.628 [2024-05-15 02:01:57.238463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.238567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.238595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.628 qpair failed and we were unable to recover it. 00:33:33.628 [2024-05-15 02:01:57.238688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.238812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.238839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.628 qpair failed and we were unable to recover it. 00:33:33.628 [2024-05-15 02:01:57.238957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.239080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.239106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.628 qpair failed and we were unable to recover it. 00:33:33.628 [2024-05-15 02:01:57.239209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.239344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.239371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.628 qpair failed and we were unable to recover it. 
00:33:33.628 [2024-05-15 02:01:57.239472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.239566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.239592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.628 qpair failed and we were unable to recover it. 00:33:33.628 [2024-05-15 02:01:57.239702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.239808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.239834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.628 qpair failed and we were unable to recover it. 00:33:33.628 [2024-05-15 02:01:57.239939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.240030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.240057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.628 qpair failed and we were unable to recover it. 00:33:33.628 [2024-05-15 02:01:57.240155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.240252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.240280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.628 qpair failed and we were unable to recover it. 00:33:33.628 [2024-05-15 02:01:57.240385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.240509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.628 [2024-05-15 02:01:57.240535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.629 qpair failed and we were unable to recover it. 00:33:33.629 [2024-05-15 02:01:57.240649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.629 [2024-05-15 02:01:57.240753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.629 [2024-05-15 02:01:57.240779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.629 qpair failed and we were unable to recover it. 00:33:33.629 [2024-05-15 02:01:57.240893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.629 [2024-05-15 02:01:57.240992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.629 [2024-05-15 02:01:57.241017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.629 qpair failed and we were unable to recover it. 
00:33:33.629 [2024-05-15 02:01:57.241107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.629 [2024-05-15 02:01:57.241225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.629 [2024-05-15 02:01:57.241252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.629 qpair failed and we were unable to recover it. 00:33:33.629 [2024-05-15 02:01:57.241369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.629 [2024-05-15 02:01:57.241461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.629 [2024-05-15 02:01:57.241487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.629 qpair failed and we were unable to recover it. 00:33:33.629 [2024-05-15 02:01:57.241617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.629 [2024-05-15 02:01:57.241715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.629 [2024-05-15 02:01:57.241740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.629 qpair failed and we were unable to recover it. 00:33:33.629 [2024-05-15 02:01:57.241848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.629 [2024-05-15 02:01:57.241936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.629 [2024-05-15 02:01:57.241962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.629 qpair failed and we were unable to recover it. 00:33:33.629 [2024-05-15 02:01:57.242110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.629 [2024-05-15 02:01:57.242209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.629 [2024-05-15 02:01:57.242241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.629 qpair failed and we were unable to recover it. 00:33:33.629 [2024-05-15 02:01:57.242340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.629 [2024-05-15 02:01:57.242443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.629 [2024-05-15 02:01:57.242470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.629 qpair failed and we were unable to recover it. 00:33:33.629 [2024-05-15 02:01:57.242570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.629 [2024-05-15 02:01:57.242697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.629 [2024-05-15 02:01:57.242729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.629 qpair failed and we were unable to recover it. 
00:33:33.629 [2024-05-15 02:01:57.242835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.629 [2024-05-15 02:01:57.242926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.629 [2024-05-15 02:01:57.242952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2114000b90 with addr=10.0.0.2, port=4420 00:33:33.629 qpair failed and we were unable to recover it.
[... the identical connect() failed (errno = 111) / sock connection error sequence repeats back-to-back from 02:01:57.243053 through 02:01:57.265753, first against tqpair=0x7f2114000b90 and then, from 02:01:57.253187 onward, against tqpair=0x1d7c570, always with addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:33:33.632 A controller has encountered a failure and is being reset. 00:33:33.632 [2024-05-15 02:01:57.265908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.632 [2024-05-15 02:01:57.266032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:33.632 [2024-05-15 02:01:57.266062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d8a0f0 with addr=10.0.0.2, port=4420 [2024-05-15 02:01:57.266081] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8a0f0 is same with the state(5) to be set [2024-05-15 02:01:57.266108] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d8a0f0 (9): Bad file descriptor [2024-05-15 02:01:57.266128] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state [2024-05-15 02:01:57.266142] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed [2024-05-15 02:01:57.266159] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:33.632 Unable to reset the controller.
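errno = 111 in the flood above is ECONNREFUSED: the host side keeps dialing 10.0.0.2:4420 while the target is down, which is exactly what this disconnect test provokes. A minimal sketch of watching for the listener to come back from the initiator side (the nc probe below is an illustration, not part of the harness):
# Sketch, not from the test: poll until 10.0.0.2:4420 accepts TCP again.
# Each attempt while the SPDK target is stopped fails with ECONNREFUSED (111).
while ! nc -z -w 1 10.0.0.2 4420; do
    echo "connect() refused, target not listening yet"
    sleep 0.1
done
echo "10.0.0.2:4420 is accepting connections again"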
00:33:33.632 02:01:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:33:33.632 02:01:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@861 -- # return 0 00:33:33.632 02:01:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:33.632 02:01:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@727 -- # xtrace_disable 00:33:33.632 02:01:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:33.632 02:01:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:33.632 02:01:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:33.633 02:01:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:33.633 02:01:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:33.633 Malloc0 00:33:33.633 02:01:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:33.633 02:01:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:33.633 02:01:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:33.633 02:01:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:33.633 [2024-05-15 02:01:57.334466] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:33.633 02:01:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:33.633 02:01:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:33.633 02:01:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:33.633 02:01:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:33.633 02:01:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:33.633 02:01:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:33.633 02:01:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:33.633 02:01:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:33.633 02:01:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:33.633 02:01:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:33.633 02:01:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:33:33.633 02:01:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:33.633 [2024-05-15 02:01:57.362467] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:33:33.633 [2024-05-15 02:01:57.362766] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:33.633 02:01:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:33.633 02:01:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:33.633 02:01:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:33.633 02:01:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:33.633 02:01:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:33.633 02:01:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 21841 00:33:34.565 Controller properly reset. 00:33:39.832 Initializing NVMe Controllers 00:33:39.832 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:39.832 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:39.832 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:33:39.832 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:33:39.832 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:33:39.832 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:33:39.832 Initialization complete. Launching workers. 
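For reference, the rpc_cmd calls traced above are thin wrappers around scripts/rpc.py; a rough hand-run equivalent of the target setup this test performs would look like the sketch below (it assumes an nvmf_tgt is already running and that rpc.py talks to the default /var/tmp/spdk.sock; all subcommands and flags are taken verbatim from the trace):
# Sketch: manual equivalent of the target setup traced above.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC bdev_malloc_create 64 512 -b Malloc0          # 64 MB bdev, 512 B blocks
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420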
00:33:39.832 Starting thread on core 1 00:33:39.832 Starting thread on core 2 00:33:39.832 Starting thread on core 3 00:33:39.832 Starting thread on core 0 00:33:39.832 02:02:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:33:39.832 00:33:39.832 real 0m10.697s 00:33:39.832 user 0m33.746s 00:33:39.832 sys 0m7.346s 00:33:39.832 02:02:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:33:39.832 02:02:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:39.832 ************************************ 00:33:39.832 END TEST nvmf_target_disconnect_tc2 00:33:39.832 ************************************ 00:33:39.832 02:02:03 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:33:39.832 02:02:03 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:33:39.832 02:02:03 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:33:39.832 02:02:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:39.832 02:02:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:33:39.832 02:02:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:39.832 02:02:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:33:39.832 02:02:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:39.832 02:02:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:39.832 rmmod nvme_tcp 00:33:39.832 rmmod nvme_fabrics 00:33:39.832 rmmod nvme_keyring 00:33:39.832 02:02:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:39.832 02:02:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:33:39.832 02:02:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:33:39.832 02:02:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 22248 ']' 00:33:39.832 02:02:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 22248 00:33:39.832 02:02:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@947 -- # '[' -z 22248 ']' 00:33:39.832 02:02:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # kill -0 22248 00:33:39.832 02:02:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # uname 00:33:39.832 02:02:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:33:39.832 02:02:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 22248 00:33:39.832 02:02:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # process_name=reactor_4 00:33:39.832 02:02:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # '[' reactor_4 = sudo ']' 00:33:39.832 02:02:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # echo 'killing process with pid 22248' 00:33:39.832 killing process with pid 22248 00:33:39.832 02:02:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # kill 22248 00:33:39.832 [2024-05-15 02:02:03.285878] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 
00:33:39.832 02:02:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@971 -- # wait 22248 00:33:39.832 02:02:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:39.832 02:02:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:39.832 02:02:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:39.832 02:02:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:39.832 02:02:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:39.832 02:02:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:39.832 02:02:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:39.832 02:02:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:41.731 02:02:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:41.731 00:33:41.731 real 0m15.869s 00:33:41.731 user 0m59.381s 00:33:41.731 sys 0m9.946s 00:33:41.731 02:02:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # xtrace_disable 00:33:41.731 02:02:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:41.731 ************************************ 00:33:41.731 END TEST nvmf_target_disconnect 00:33:41.731 ************************************ 00:33:41.731 02:02:05 nvmf_tcp -- nvmf/nvmf.sh@125 -- # timing_exit host 00:33:41.731 02:02:05 nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:33:41.731 02:02:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:41.731 02:02:05 nvmf_tcp -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:33:41.731 00:33:41.731 real 26m53.486s 00:33:41.731 user 72m50.982s 00:33:41.731 sys 6m30.509s 00:33:41.731 02:02:05 nvmf_tcp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:33:41.731 02:02:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:41.731 ************************************ 00:33:41.732 END TEST nvmf_tcp 00:33:41.732 ************************************ 00:33:41.991 02:02:05 -- spdk/autotest.sh@284 -- # [[ 0 -eq 0 ]] 00:33:41.991 02:02:05 -- spdk/autotest.sh@285 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:41.991 02:02:05 -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:33:41.991 02:02:05 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:33:41.991 02:02:05 -- common/autotest_common.sh@10 -- # set +x 00:33:41.991 ************************************ 00:33:41.991 START TEST spdkcli_nvmf_tcp 00:33:41.991 ************************************ 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:41.991 * Looking for test storage... 
00:33:41.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=23445 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 23445 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@828 -- # '[' -z 23445 ']' 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local max_retries=100 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/spdk.sock...' 00:33:41.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # xtrace_disable 00:33:41.991 02:02:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:41.991 [2024-05-15 02:02:05.819685] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:33:41.991 [2024-05-15 02:02:05.819765] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid23445 ] 00:33:41.991 EAL: No free 2048 kB hugepages reported on node 1 00:33:41.991 [2024-05-15 02:02:05.891136] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:42.250 [2024-05-15 02:02:05.984490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:42.250 [2024-05-15 02:02:05.984495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:42.250 02:02:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:33:42.250 02:02:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@861 -- # return 0 00:33:42.250 02:02:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:33:42.250 02:02:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:33:42.250 02:02:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:42.250 02:02:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:33:42.250 02:02:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:33:42.250 02:02:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:33:42.250 02:02:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:33:42.250 02:02:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:42.250 02:02:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:42.250 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:42.250 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:33:42.250 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:33:42.250 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:33:42.250 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:33:42.250 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:33:42.250 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:42.250 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:33:42.250 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:33:42.250 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:42.250 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:42.250 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:42.250 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:42.250 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:42.250 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:42.250 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:42.250 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:42.250 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:42.250 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:42.250 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:42.250 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:42.250 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:42.250 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:33:42.250 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:42.250 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:42.250 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:42.250 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:33:42.250 ' 00:33:44.775 [2024-05-15 02:02:08.623311] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:46.146 [2024-05-15 02:02:09.847097] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:33:46.146 [2024-05-15 02:02:09.847788] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:33:48.671 [2024-05-15 02:02:12.114837] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:33:50.584 [2024-05-15 02:02:14.052837] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:33:51.957 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:51.957 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:51.957 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:51.957 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:51.957 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:51.957 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:51.957 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:51.957 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:51.957 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:51.958 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:51.958 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:51.958 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:51.958 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:51.958 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:51.958 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:51.958 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:51.958 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:51.958 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:51.958 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:51.958 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:51.958 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:51.958 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:51.958 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:51.958 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:33:51.958 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:51.958 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:51.958 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:51.958 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:51.958 02:02:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:33:51.958 02:02:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:33:51.958 02:02:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:51.958 02:02:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:51.958 02:02:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:33:51.958 02:02:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:51.958 02:02:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:33:51.958 02:02:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll 
/nvmf 00:33:52.215 02:02:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:52.215 02:02:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:52.215 02:02:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:33:52.215 02:02:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:33:52.215 02:02:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:52.473 02:02:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:33:52.473 02:02:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:33:52.473 02:02:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:52.473 02:02:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:52.473 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:33:52.473 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:52.473 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:52.473 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:33:52.473 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:33:52.473 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:52.473 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:52.473 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:33:52.473 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:33:52.473 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:33:52.473 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:33:52.473 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:33:52.473 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:33:52.473 ' 00:33:57.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:57.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:57.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:57.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:57.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:33:57.733 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:33:57.733 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:57.733 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:57.733 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:57.733 
Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:57.733 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:57.733 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:57.733 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:57.733 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:57.733 02:02:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:57.733 02:02:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:33:57.733 02:02:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:57.733 02:02:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 23445 00:33:57.733 02:02:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@947 -- # '[' -z 23445 ']' 00:33:57.733 02:02:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # kill -0 23445 00:33:57.733 02:02:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # uname 00:33:57.733 02:02:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:33:57.733 02:02:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 23445 00:33:57.733 02:02:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:33:57.733 02:02:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:33:57.733 02:02:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # echo 'killing process with pid 23445' 00:33:57.733 killing process with pid 23445 00:33:57.733 02:02:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # kill 23445 00:33:57.733 [2024-05-15 02:02:21.486638] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:33:57.733 02:02:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # wait 23445 00:33:57.991 02:02:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:33:57.991 02:02:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:33:57.991 02:02:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 23445 ']' 00:33:57.991 02:02:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 23445 00:33:57.991 02:02:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@947 -- # '[' -z 23445 ']' 00:33:57.991 02:02:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # kill -0 23445 00:33:57.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (23445) - No such process 00:33:57.991 02:02:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # echo 'Process with pid 23445 is not found' 00:33:57.991 Process with pid 23445 is not found 00:33:57.991 02:02:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:57.991 02:02:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:57.991 02:02:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:57.991 00:33:57.991 real 0m15.991s 00:33:57.991 user 0m33.780s 00:33:57.991 sys 0m0.806s 00:33:57.991 02:02:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:33:57.991 02:02:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:57.991 
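Each quoted triple passed to spdkcli_job.py above has the form 'command' 'expected substring' verify-flag, which is why the "Executing command:" echoes print three-element lists. A reduced bash analogue of that driver loop, offered only as a sketch (the real driver is the Python script test/spdkcli/spdkcli_job.py and also handles timeouts; run_and_match is an illustrative name, not part of the harness):

    # run one spdkcli command, then verify the expected substring in its output;
    # an empty expected string (as in the transport create triple) skips the check
    run_and_match() {
        local cmd=$1 want=$2
        out=$(scripts/spdkcli.py $cmd) || return 1
        [[ -z $want ]] || grep -q -- "$want" <<< "$out"
    }
    run_and_match '/bdevs/malloc create 32 512 Malloc1' 'Malloc1'
    run_and_match '/nvmf/subsystem delete_all' 'nqn.2014-08.org.spdk:cnode2'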
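The teardown sequence above (kill, a kill -0 probe that reports "No such process", then the guarded "is not found" message) is the standard waitforlisten/killprocess pairing from test/common/autotest_common.sh. A minimal stand-alone sketch of the same pattern, with the app invocation taken from this run and the polling loop simplified:

    # start the target and wait for its RPC socket (waitforlisten equivalent)
    ./build/bin/nvmf_tgt -m 0x3 -p 0 &
    pid=$!
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
    # ... drive the target over RPC ...
    kill "$pid"
    # kill -0 sends no signal; it only tests whether the pid still exists
    while kill -0 "$pid" 2>/dev/null; do sleep 0.1; done
    echo "Process with pid $pid is not found"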
************************************ 00:33:57.991 END TEST spdkcli_nvmf_tcp 00:33:57.991 ************************************ 00:33:57.991 02:02:21 -- spdk/autotest.sh@286 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:57.991 02:02:21 -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:33:57.991 02:02:21 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:33:57.991 02:02:21 -- common/autotest_common.sh@10 -- # set +x 00:33:57.991 ************************************ 00:33:57.991 START TEST nvmf_identify_passthru 00:33:57.991 ************************************ 00:33:57.991 02:02:21 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:57.991 * Looking for test storage... 00:33:57.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:57.991 02:02:21 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:57.991 02:02:21 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:33:57.991 02:02:21 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:57.991 02:02:21 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:57.991 02:02:21 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:57.991 02:02:21 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:57.991 02:02:21 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:57.991 02:02:21 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:57.991 02:02:21 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:57.991 02:02:21 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:57.991 02:02:21 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:57.991 02:02:21 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:57.991 02:02:21 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:33:57.991 02:02:21 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:33:57.991 02:02:21 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:57.991 02:02:21 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:57.991 02:02:21 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:57.991 02:02:21 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:57.991 02:02:21 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:57.991 02:02:21 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:57.991 02:02:21 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:57.991 02:02:21 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:57.991 02:02:21 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.992 02:02:21 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.992 02:02:21 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.992 02:02:21 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:57.992 02:02:21 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.992 02:02:21 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:33:57.992 02:02:21 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:57.992 02:02:21 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:57.992 02:02:21 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:57.992 02:02:21 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:57.992 02:02:21 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:57.992 02:02:21 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:57.992 02:02:21 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:57.992 02:02:21 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:57.992 02:02:21 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:57.992 02:02:21 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:57.992 02:02:21 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:57.992 02:02:21 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:57.992 02:02:21 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.992 02:02:21 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.992 02:02:21 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.992 02:02:21 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:57.992 02:02:21 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.992 02:02:21 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:33:57.992 02:02:21 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:57.992 02:02:21 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:57.992 02:02:21 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:57.992 02:02:21 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:57.992 02:02:21 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:57.992 02:02:21 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:57.992 02:02:21 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:57.992 02:02:21 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:57.992 02:02:21 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:57.992 02:02:21 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:57.992 02:02:21 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:33:57.992 02:02:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:00.518 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:00.518 02:02:24 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:34:00.518 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:00.518 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:00.518 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:00.518 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:00.518 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:00.518 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:34:00.518 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:00.518 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:34:00.518 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:34:00.518 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:34:00.518 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:34:00.519 Found 0000:09:00.0 (0x8086 - 0x159b) 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:34:00.519 Found 0000:09:00.1 (0x8086 - 0x159b) 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:34:00.519 Found net devices under 0000:09:00.0: cvl_0_0 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:34:00.519 Found net devices under 0000:09:00.1: cvl_0_1 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
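The device discovery above buckets NICs by PCI vendor:device ID (e810, x722, mlx) and then resolves each selected device to its kernel net interface through sysfs, which is where the "Found net devices under 0000:09:00.x: cvl_0_x" lines come from. A condensed sketch of that logic (pci_bus_cache is assumed to have been filled from lspci earlier in nvmf/common.sh; only the E810 branch exercised here is shown):

    intel=0x8086
    declare -A pci_bus_cache   # assumed populated elsewhere in nvmf/common.sh
    net_devs=()
    # E810 parts (device IDs 0x1592/0x159b, driver ice) are what this rig exposes
    e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})
    pci_devs=("${e810[@]}")
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip paths to interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done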
00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:00.519 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:00.519 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:34:00.519 00:34:00.519 --- 10.0.0.2 ping statistics --- 00:34:00.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:00.519 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:00.519 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:00.519 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:34:00.519 00:34:00.519 --- 10.0.0.1 ping statistics --- 00:34:00.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:00.519 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:00.519 02:02:24 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:00.519 02:02:24 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:00.519 02:02:24 nvmf_identify_passthru -- common/autotest_common.sh@721 -- # xtrace_disable 00:34:00.519 02:02:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:00.519 02:02:24 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:00.519 02:02:24 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=() 00:34:00.519 02:02:24 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # local bdfs 00:34:00.519 02:02:24 nvmf_identify_passthru -- common/autotest_common.sh@1522 -- # bdfs=($(get_nvme_bdfs)) 00:34:00.519 02:02:24 nvmf_identify_passthru -- common/autotest_common.sh@1522 -- # get_nvme_bdfs 00:34:00.519 02:02:24 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=() 00:34:00.519 02:02:24 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # local bdfs 00:34:00.519 02:02:24 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:00.519 02:02:24 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:00.519 02:02:24 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # jq -r '.config[].params.traddr' 00:34:00.519 02:02:24 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # (( 1 == 0 )) 00:34:00.519 02:02:24 nvmf_identify_passthru -- common/autotest_common.sh@1516 -- # printf '%s\n' 0000:0b:00.0 00:34:00.519 02:02:24 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # echo 0000:0b:00.0 00:34:00.519 02:02:24 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:0b:00.0 00:34:00.519 02:02:24 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:0b:00.0 ']' 00:34:00.519 02:02:24 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:34:00.519 02:02:24 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:00.519 02:02:24 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:00.519 EAL: No free 2048 kB hugepages reported on node 1 00:34:04.702 
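The nvmf_tcp_init block above builds the test topology: one physical port (cvl_0_0) is moved into a private network namespace and addressed as the target at 10.0.0.2, while its sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and the two pings verify each direction. The same sequence, condensed from the records (address flushes omitted):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator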
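Right after the network check, get_first_nvme_bdf resolves the local NVMe drive's PCI address through gen_nvme.sh and jq, and the serial number is captured by grepping spdk_nvme_identify output. Condensed sketch of those two steps (head -1 stands in for the "take the first bdf" logic of the real helper):

    bdf=$(scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -1)
    serial=$(build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 \
             | grep 'Serial Number:' | awk '{print $3}')
    echo "$bdf -> serial $serial"    # 0000:0b:00.0 -> BTLJ72430F4Q1P0FGN in this run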
02:02:28 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F4Q1P0FGN 00:34:04.702 02:02:28 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:34:04.702 02:02:28 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:34:04.702 02:02:28 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:04.702 EAL: No free 2048 kB hugepages reported on node 1 00:34:08.884 02:02:32 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:34:08.884 02:02:32 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:08.884 02:02:32 nvmf_identify_passthru -- common/autotest_common.sh@727 -- # xtrace_disable 00:34:08.884 02:02:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:08.884 02:02:32 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:08.884 02:02:32 nvmf_identify_passthru -- common/autotest_common.sh@721 -- # xtrace_disable 00:34:08.884 02:02:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:08.884 02:02:32 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=28348 00:34:08.884 02:02:32 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:08.884 02:02:32 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:08.884 02:02:32 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 28348 00:34:08.884 02:02:32 nvmf_identify_passthru -- common/autotest_common.sh@828 -- # '[' -z 28348 ']' 00:34:08.884 02:02:32 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:08.884 02:02:32 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local max_retries=100 00:34:08.884 02:02:32 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:08.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:08.884 02:02:32 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # xtrace_disable 00:34:08.884 02:02:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:08.884 [2024-05-15 02:02:32.678168] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:34:08.884 [2024-05-15 02:02:32.678268] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:08.884 EAL: No free 2048 kB hugepages reported on node 1 00:34:08.884 [2024-05-15 02:02:32.752668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:09.141 [2024-05-15 02:02:32.839308] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:09.141 [2024-05-15 02:02:32.839361] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:09.141 [2024-05-15 02:02:32.839385] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:09.141 [2024-05-15 02:02:32.839396] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:09.141 [2024-05-15 02:02:32.839406] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:09.141 [2024-05-15 02:02:32.839461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:09.141 [2024-05-15 02:02:32.839544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:09.141 [2024-05-15 02:02:32.839598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:09.141 [2024-05-15 02:02:32.839600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:09.141 02:02:32 nvmf_identify_passthru -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:34:09.141 02:02:32 nvmf_identify_passthru -- common/autotest_common.sh@861 -- # return 0 00:34:09.141 02:02:32 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:09.141 02:02:32 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:09.141 02:02:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:09.141 INFO: Log level set to 20 00:34:09.141 INFO: Requests: 00:34:09.142 { 00:34:09.142 "jsonrpc": "2.0", 00:34:09.142 "method": "nvmf_set_config", 00:34:09.142 "id": 1, 00:34:09.142 "params": { 00:34:09.142 "admin_cmd_passthru": { 00:34:09.142 "identify_ctrlr": true 00:34:09.142 } 00:34:09.142 } 00:34:09.142 } 00:34:09.142 00:34:09.142 INFO: response: 00:34:09.142 { 00:34:09.142 "jsonrpc": "2.0", 00:34:09.142 "id": 1, 00:34:09.142 "result": true 00:34:09.142 } 00:34:09.142 00:34:09.142 02:02:32 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:09.142 02:02:32 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:09.142 02:02:32 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:09.142 02:02:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:09.142 INFO: Setting log level to 20 00:34:09.142 INFO: Setting log level to 20 00:34:09.142 INFO: Log level set to 20 00:34:09.142 INFO: Log level set to 20 00:34:09.142 INFO: Requests: 00:34:09.142 { 00:34:09.142 "jsonrpc": "2.0", 00:34:09.142 "method": "framework_start_init", 00:34:09.142 "id": 1 00:34:09.142 } 00:34:09.142 00:34:09.142 INFO: Requests: 00:34:09.142 { 00:34:09.142 "jsonrpc": "2.0", 00:34:09.142 "method": "framework_start_init", 00:34:09.142 "id": 1 00:34:09.142 } 00:34:09.142 00:34:09.142 [2024-05-15 02:02:33.001604] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:09.142 INFO: response: 00:34:09.142 { 00:34:09.142 "jsonrpc": "2.0", 00:34:09.142 "id": 1, 00:34:09.142 "result": true 00:34:09.142 } 00:34:09.142 00:34:09.142 INFO: response: 00:34:09.142 { 00:34:09.142 "jsonrpc": "2.0", 00:34:09.142 "id": 1, 00:34:09.142 "result": true 00:34:09.142 } 00:34:09.142 00:34:09.142 02:02:33 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:09.142 02:02:33 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:09.142 02:02:33 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:09.142 02:02:33 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:34:09.142 INFO: Setting log level to 40 00:34:09.142 INFO: Setting log level to 40 00:34:09.142 INFO: Setting log level to 40 00:34:09.142 [2024-05-15 02:02:33.011586] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:09.142 02:02:33 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:09.142 02:02:33 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:09.142 02:02:33 nvmf_identify_passthru -- common/autotest_common.sh@727 -- # xtrace_disable 00:34:09.142 02:02:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:09.142 02:02:33 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:0b:00.0 00:34:09.142 02:02:33 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:09.142 02:02:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:12.417 Nvme0n1 00:34:12.417 02:02:35 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:12.418 02:02:35 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:12.418 02:02:35 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:12.418 02:02:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:12.418 02:02:35 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:12.418 02:02:35 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:12.418 02:02:35 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:12.418 02:02:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:12.418 02:02:35 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:12.418 02:02:35 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:12.418 02:02:35 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:12.418 02:02:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:12.418 [2024-05-15 02:02:35.915279] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:34:12.418 [2024-05-15 02:02:35.915609] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:12.418 02:02:35 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:12.418 02:02:35 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:12.418 02:02:35 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:12.418 02:02:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:12.418 [ 00:34:12.418 { 00:34:12.418 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:12.418 "subtype": "Discovery", 00:34:12.418 "listen_addresses": [], 00:34:12.418 "allow_any_host": true, 00:34:12.418 "hosts": [] 00:34:12.418 }, 00:34:12.418 { 00:34:12.418 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:12.418 "subtype": "NVMe", 00:34:12.418 "listen_addresses": [ 00:34:12.418 { 00:34:12.418 "trtype": "TCP", 
00:34:12.418 "adrfam": "IPv4", 00:34:12.418 "traddr": "10.0.0.2", 00:34:12.418 "trsvcid": "4420" 00:34:12.418 } 00:34:12.418 ], 00:34:12.418 "allow_any_host": true, 00:34:12.418 "hosts": [], 00:34:12.418 "serial_number": "SPDK00000000000001", 00:34:12.418 "model_number": "SPDK bdev Controller", 00:34:12.418 "max_namespaces": 1, 00:34:12.418 "min_cntlid": 1, 00:34:12.418 "max_cntlid": 65519, 00:34:12.418 "namespaces": [ 00:34:12.418 { 00:34:12.418 "nsid": 1, 00:34:12.418 "bdev_name": "Nvme0n1", 00:34:12.418 "name": "Nvme0n1", 00:34:12.418 "nguid": "5C646D8817054AEEAA5883F2C4ADCA53", 00:34:12.418 "uuid": "5c646d88-1705-4aee-aa58-83f2c4adca53" 00:34:12.418 } 00:34:12.418 ] 00:34:12.418 } 00:34:12.418 ] 00:34:12.418 02:02:35 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:12.418 02:02:35 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:12.418 02:02:35 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:12.418 02:02:35 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:12.418 EAL: No free 2048 kB hugepages reported on node 1 00:34:12.418 02:02:36 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F4Q1P0FGN 00:34:12.418 02:02:36 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:12.418 02:02:36 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:12.418 02:02:36 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:12.418 EAL: No free 2048 kB hugepages reported on node 1 00:34:12.418 02:02:36 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:34:12.418 02:02:36 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F4Q1P0FGN '!=' BTLJ72430F4Q1P0FGN ']' 00:34:12.418 02:02:36 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:34:12.418 02:02:36 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:12.418 02:02:36 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:12.418 02:02:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:12.418 02:02:36 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:12.418 02:02:36 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:12.418 02:02:36 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:12.418 02:02:36 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:12.418 02:02:36 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:34:12.418 02:02:36 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:12.418 02:02:36 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:34:12.418 02:02:36 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:12.418 02:02:36 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:12.418 rmmod nvme_tcp 00:34:12.418 rmmod nvme_fabrics 00:34:12.418 rmmod 
nvme_keyring 00:34:12.418 02:02:36 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:12.418 02:02:36 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:34:12.418 02:02:36 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:34:12.418 02:02:36 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 28348 ']' 00:34:12.418 02:02:36 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 28348 00:34:12.418 02:02:36 nvmf_identify_passthru -- common/autotest_common.sh@947 -- # '[' -z 28348 ']' 00:34:12.418 02:02:36 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # kill -0 28348 00:34:12.418 02:02:36 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # uname 00:34:12.418 02:02:36 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:34:12.418 02:02:36 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 28348 00:34:12.418 02:02:36 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:34:12.418 02:02:36 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:34:12.418 02:02:36 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # echo 'killing process with pid 28348' 00:34:12.418 killing process with pid 28348 00:34:12.418 02:02:36 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # kill 28348 00:34:12.418 [2024-05-15 02:02:36.310169] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:34:12.418 02:02:36 nvmf_identify_passthru -- common/autotest_common.sh@971 -- # wait 28348 00:34:14.314 02:02:37 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:14.314 02:02:37 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:14.314 02:02:37 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:14.314 02:02:37 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:14.314 02:02:37 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:14.314 02:02:37 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:14.314 02:02:37 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:14.314 02:02:37 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:16.212 02:02:39 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:16.212 00:34:16.212 real 0m18.119s 00:34:16.212 user 0m26.254s 00:34:16.212 sys 0m2.545s 00:34:16.212 02:02:39 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # xtrace_disable 00:34:16.212 02:02:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:16.212 ************************************ 00:34:16.212 END TEST nvmf_identify_passthru 00:34:16.212 ************************************ 00:34:16.212 02:02:39 -- spdk/autotest.sh@288 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:16.212 02:02:39 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:34:16.212 02:02:39 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:34:16.212 02:02:39 -- common/autotest_common.sh@10 -- # set +x 00:34:16.212 ************************************ 00:34:16.212 START TEST nvmf_dif 00:34:16.212 
************************************ 00:34:16.212 02:02:39 nvmf_dif -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:16.212 * Looking for test storage... 00:34:16.212 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:16.212 02:02:39 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:16.212 02:02:39 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:34:16.212 02:02:39 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:16.212 02:02:39 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:16.212 02:02:39 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:16.212 02:02:39 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:16.212 02:02:39 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:16.212 02:02:39 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:16.212 02:02:39 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:16.212 02:02:39 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:16.212 02:02:39 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:16.212 02:02:39 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:16.212 02:02:39 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:34:16.212 02:02:39 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:34:16.212 02:02:39 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:16.212 02:02:39 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:16.212 02:02:39 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:16.212 02:02:39 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:16.212 02:02:39 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:16.212 02:02:39 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:16.212 02:02:39 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:16.212 02:02:39 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:16.212 02:02:39 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.212 02:02:39 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.212 02:02:39 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.212 02:02:39 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:34:16.212 02:02:39 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.212 02:02:39 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:34:16.212 02:02:39 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:16.212 02:02:39 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:16.212 02:02:39 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:16.212 02:02:39 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:16.212 02:02:39 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:16.212 02:02:39 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:16.212 02:02:39 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:16.212 02:02:39 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:16.212 02:02:39 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:34:16.212 02:02:39 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:34:16.212 02:02:39 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:16.212 02:02:39 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:34:16.212 02:02:39 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:34:16.212 02:02:39 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:16.212 02:02:39 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:16.212 02:02:39 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:16.212 02:02:39 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:16.212 02:02:39 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:16.212 02:02:39 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:16.212 02:02:39 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:16.212 02:02:39 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:16.212 02:02:39 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:16.212 02:02:39 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:16.212 02:02:39 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:34:16.212 02:02:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 
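Note on the device scan that follows: nvmf/common.sh seeds per-family arrays of vendor:device IDs (e810, x722, mlx) and then walks the PCI bus, reporting hits such as "Found 0000:09:00.0 (0x8086 - 0x159b)". A minimal sketch of the same technique, assuming a plain sysfs walk rather than the script's pci_bus_cache; the IDs are copied from the arrays traced below:

    #!/usr/bin/env bash
    # Sketch: locate netdevs backed by supported Intel E810 NICs via sysfs.
    intel=0x8086
    e810_ids=(0x1592 0x159b)            # E810 variants, as seeded in common.sh
    declare -a pci_devs=() net_devs=()
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor"); device=$(<"$dev/device")
        for id in "${e810_ids[@]}"; do
            [[ $vendor == "$intel" && $device == "$id" ]] && pci_devs+=("${dev##*/}")
        done
    done
    for pci in "${pci_devs[@]}"; do
        # Each matching PCI function exposes its interfaces under .../net/.
        for path in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $path ]] || continue
            net_devs+=("${path##*/}")
            echo "Found net devices under $pci: ${path##*/}"
        done
    done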
00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:34:18.739 Found 0000:09:00.0 (0x8086 - 0x159b) 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:34:18.739 Found 0000:09:00.1 (0x8086 - 0x159b) 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:18.739 02:02:42 nvmf_dif -- 
nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:34:18.739 Found net devices under 0000:09:00.0: cvl_0_0 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:34:18.739 Found net devices under 0000:09:00.1: cvl_0_1 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:18.739 02:02:42 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:18.740 02:02:42 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:18.740 02:02:42 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:18.740 02:02:42 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:18.740 02:02:42 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:18.740 02:02:42 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:18.740 02:02:42 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:18.740 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:18.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:34:18.740 00:34:18.740 --- 10.0.0.2 ping statistics --- 00:34:18.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:18.740 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:34:18.740 02:02:42 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:18.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:18.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:34:18.740 00:34:18.740 --- 10.0.0.1 ping statistics --- 00:34:18.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:18.740 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:34:18.740 02:02:42 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:18.740 02:02:42 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:34:18.740 02:02:42 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:34:18.740 02:02:42 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:20.157 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:20.157 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:20.157 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:20.157 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:20.157 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:20.157 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:20.157 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:20.157 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:20.157 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:20.157 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:20.157 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:20.157 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:20.157 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:20.157 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:20.157 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:20.157 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:20.157 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:20.157 02:02:44 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:20.157 02:02:44 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:20.157 02:02:44 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:20.157 02:02:44 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:20.157 02:02:44 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:20.157 02:02:44 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:20.157 02:02:44 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:34:20.157 02:02:44 nvmf_dif -- 
target/dif.sh@137 -- # nvmfappstart 00:34:20.157 02:02:44 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:20.158 02:02:44 nvmf_dif -- common/autotest_common.sh@721 -- # xtrace_disable 00:34:20.158 02:02:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:20.158 02:02:44 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=31996 00:34:20.158 02:02:44 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:34:20.158 02:02:44 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 31996 00:34:20.158 02:02:44 nvmf_dif -- common/autotest_common.sh@828 -- # '[' -z 31996 ']' 00:34:20.158 02:02:44 nvmf_dif -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:20.158 02:02:44 nvmf_dif -- common/autotest_common.sh@833 -- # local max_retries=100 00:34:20.158 02:02:44 nvmf_dif -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:20.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:20.158 02:02:44 nvmf_dif -- common/autotest_common.sh@837 -- # xtrace_disable 00:34:20.158 02:02:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:20.415 [2024-05-15 02:02:44.101759] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:34:20.415 [2024-05-15 02:02:44.101829] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:20.415 EAL: No free 2048 kB hugepages reported on node 1 00:34:20.415 [2024-05-15 02:02:44.175962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:20.415 [2024-05-15 02:02:44.259675] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:20.415 [2024-05-15 02:02:44.259744] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:20.415 [2024-05-15 02:02:44.259758] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:20.415 [2024-05-15 02:02:44.259770] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:20.415 [2024-05-15 02:02:44.259780] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
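At this point target/dif.sh has started the target inside the server-side namespace (note the `ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF` trace above) and waitforlisten blocks until the RPC socket answers. A sketch of that startup handshake, assuming the default /var/tmp/spdk.sock path; the polling loop stands in for the real helper in autotest_common.sh:

    # Launch the target in the namespace, exactly as traced above.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    # Poll the RPC socket until the app services requests (waitforlisten stand-in).
    until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done
    echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"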
00:34:20.415 [2024-05-15 02:02:44.259821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:20.673 02:02:44 nvmf_dif -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:34:20.673 02:02:44 nvmf_dif -- common/autotest_common.sh@861 -- # return 0 00:34:20.673 02:02:44 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:20.673 02:02:44 nvmf_dif -- common/autotest_common.sh@727 -- # xtrace_disable 00:34:20.673 02:02:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:20.673 02:02:44 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:20.673 02:02:44 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:34:20.673 02:02:44 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:34:20.673 02:02:44 nvmf_dif -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:20.673 02:02:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:20.673 [2024-05-15 02:02:44.387278] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:20.673 02:02:44 nvmf_dif -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:20.673 02:02:44 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:34:20.673 02:02:44 nvmf_dif -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:34:20.673 02:02:44 nvmf_dif -- common/autotest_common.sh@1104 -- # xtrace_disable 00:34:20.673 02:02:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:20.673 ************************************ 00:34:20.673 START TEST fio_dif_1_default 00:34:20.673 ************************************ 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # fio_dif_1 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:20.673 bdev_null0 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:20.673 [2024-05-15 02:02:44.447350] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:34:20.673 [2024-05-15 02:02:44.447585] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:20.673 { 00:34:20.673 "params": { 00:34:20.673 "name": "Nvme$subsystem", 00:34:20.673 "trtype": "$TEST_TRANSPORT", 00:34:20.673 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:20.673 "adrfam": "ipv4", 00:34:20.673 "trsvcid": "$NVMF_PORT", 00:34:20.673 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:20.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:20.673 "hdgst": ${hdgst:-false}, 00:34:20.673 "ddgst": ${ddgst:-false} 00:34:20.673 }, 00:34:20.673 "method": "bdev_nvme_attach_controller" 00:34:20.673 } 00:34:20.673 EOF 00:34:20.673 )") 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1353 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local sanitizers 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # shift 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local asan_lib= 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:34:20.673 02:02:44 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # grep libasan 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:34:20.673 02:02:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:20.674 "params": { 00:34:20.674 "name": "Nvme0", 00:34:20.674 "trtype": "tcp", 00:34:20.674 "traddr": "10.0.0.2", 00:34:20.674 "adrfam": "ipv4", 00:34:20.674 "trsvcid": "4420", 00:34:20.674 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:20.674 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:20.674 "hdgst": false, 00:34:20.674 "ddgst": false 00:34:20.674 }, 00:34:20.674 "method": "bdev_nvme_attach_controller" 00:34:20.674 }' 00:34:20.674 02:02:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # asan_lib= 00:34:20.674 02:02:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:34:20.674 02:02:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:34:20.674 02:02:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:20.674 02:02:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:34:20.674 02:02:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:34:20.674 02:02:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # asan_lib= 00:34:20.674 02:02:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:34:20.674 02:02:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:20.674 02:02:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:20.931 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:20.931 fio-3.35 00:34:20.931 Starting 1 thread 00:34:20.931 EAL: No free 2048 kB hugepages reported on node 1 00:34:33.123 00:34:33.123 filename0: (groupid=0, jobs=1): err= 0: pid=32223: Wed May 15 02:02:55 2024 00:34:33.123 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10013msec) 00:34:33.123 slat (nsec): min=5598, max=68499, avg=8853.40, stdev=3451.66 00:34:33.123 clat (usec): min=40800, max=47644, avg=41008.05, stdev=435.52 00:34:33.123 lat (usec): min=40807, max=47684, avg=41016.91, stdev=435.90 00:34:33.123 clat percentiles (usec): 00:34:33.123 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:33.123 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:33.123 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:33.123 | 99.00th=[41681], 99.50th=[42206], 
99.90th=[47449], 99.95th=[47449], 00:34:33.123 | 99.99th=[47449] 00:34:33.123 bw ( KiB/s): min= 384, max= 416, per=99.51%, avg=388.80, stdev=11.72, samples=20 00:34:33.123 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:34:33.123 lat (msec) : 50=100.00% 00:34:33.123 cpu : usr=89.79%, sys=9.94%, ctx=21, majf=0, minf=264 00:34:33.123 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:33.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.123 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.123 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:33.123 00:34:33.123 Run status group 0 (all jobs): 00:34:33.123 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10013-10013msec 00:34:33.123 02:02:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:33.123 02:02:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:34:33.123 02:02:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:34:33.123 02:02:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:33.123 02:02:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:34:33.123 02:02:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:33.123 02:02:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:33.123 02:02:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:33.123 02:02:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:33.123 02:02:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:33.123 02:02:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:33.123 02:02:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:33.123 02:02:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:33.123 00:34:33.123 real 0m11.246s 00:34:33.123 user 0m10.223s 00:34:33.123 sys 0m1.258s 00:34:33.123 02:02:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # xtrace_disable 00:34:33.123 02:02:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:33.123 ************************************ 00:34:33.123 END TEST fio_dif_1_default 00:34:33.123 ************************************ 00:34:33.123 02:02:55 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:33.123 02:02:55 nvmf_dif -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:34:33.123 02:02:55 nvmf_dif -- common/autotest_common.sh@1104 -- # xtrace_disable 00:34:33.123 02:02:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:33.123 ************************************ 00:34:33.123 START TEST fio_dif_1_multi_subsystems 00:34:33.123 ************************************ 00:34:33.123 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # fio_dif_1_multi_subsystems 00:34:33.123 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:34:33.123 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- 
# local sub 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:33.124 bdev_null0 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:33.124 [2024-05-15 02:02:55.745850] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:33.124 bdev_null1 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:33.124 02:02:55 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:33.124 { 00:34:33.124 "params": { 00:34:33.124 "name": "Nvme$subsystem", 00:34:33.124 "trtype": "$TEST_TRANSPORT", 00:34:33.124 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:33.124 "adrfam": "ipv4", 00:34:33.124 "trsvcid": "$NVMF_PORT", 00:34:33.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:33.124 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:33.124 "hdgst": ${hdgst:-false}, 00:34:33.124 "ddgst": ${ddgst:-false} 00:34:33.124 }, 00:34:33.124 "method": "bdev_nvme_attach_controller" 00:34:33.124 } 00:34:33.124 EOF 00:34:33.124 )") 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1353 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local sanitizers 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # shift 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local asan_lib= 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # grep libasan 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:33.124 { 00:34:33.124 "params": { 00:34:33.124 "name": "Nvme$subsystem", 00:34:33.124 "trtype": "$TEST_TRANSPORT", 00:34:33.124 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:33.124 "adrfam": "ipv4", 00:34:33.124 "trsvcid": "$NVMF_PORT", 00:34:33.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:33.124 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:33.124 "hdgst": ${hdgst:-false}, 00:34:33.124 "ddgst": ${ddgst:-false} 00:34:33.124 }, 00:34:33.124 "method": "bdev_nvme_attach_controller" 00:34:33.124 } 00:34:33.124 EOF 00:34:33.124 )") 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
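The two DIF-enabled subsystems that fio_dif_1_multi_subsystems exercises are assembled from the rpc_cmd calls traced above. Collected into one sequence for reference (bdev sizes, NQNs, serial numbers and the 10.0.0.2:4420 listener are taken verbatim from the trace):

    # One null bdev per subsystem: 64 MiB, 512 B blocks, 16 B metadata, DIF type 1.
    for i in 0 1; do
        ./scripts/rpc.py bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 1
        ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
            --serial-number 53313233-$i --allow-any-host
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
        ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t tcp -a 10.0.0.2 -s 4420
    done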
00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:33.124 "params": { 00:34:33.124 "name": "Nvme0", 00:34:33.124 "trtype": "tcp", 00:34:33.124 "traddr": "10.0.0.2", 00:34:33.124 "adrfam": "ipv4", 00:34:33.124 "trsvcid": "4420", 00:34:33.124 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:33.124 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:33.124 "hdgst": false, 00:34:33.124 "ddgst": false 00:34:33.124 }, 00:34:33.124 "method": "bdev_nvme_attach_controller" 00:34:33.124 },{ 00:34:33.124 "params": { 00:34:33.124 "name": "Nvme1", 00:34:33.124 "trtype": "tcp", 00:34:33.124 "traddr": "10.0.0.2", 00:34:33.124 "adrfam": "ipv4", 00:34:33.124 "trsvcid": "4420", 00:34:33.124 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:33.124 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:33.124 "hdgst": false, 00:34:33.124 "ddgst": false 00:34:33.124 }, 00:34:33.124 "method": "bdev_nvme_attach_controller" 00:34:33.124 }' 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # asan_lib= 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # asan_lib= 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:33.124 02:02:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:33.124 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:33.124 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:33.125 fio-3.35 00:34:33.125 Starting 2 threads 00:34:33.125 EAL: No free 2048 kB hugepages reported on node 1 00:34:43.093 00:34:43.093 filename0: (groupid=0, jobs=1): err= 0: pid=33629: Wed May 15 02:03:06 2024 00:34:43.093 read: IOPS=190, BW=761KiB/s (779kB/s)(7616KiB/10011msec) 00:34:43.093 slat (nsec): min=7213, max=31427, avg=8880.29, stdev=2281.39 00:34:43.093 clat (usec): min=564, max=43112, avg=21002.30, stdev=20379.64 00:34:43.093 lat (usec): min=572, max=43144, avg=21011.18, stdev=20379.42 00:34:43.093 clat percentiles (usec): 00:34:43.093 | 1.00th=[ 578], 5.00th=[ 594], 10.00th=[ 611], 20.00th=[ 619], 00:34:43.093 | 30.00th=[ 635], 40.00th=[ 652], 50.00th=[ 873], 60.00th=[41157], 00:34:43.093 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:43.093 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:34:43.093 | 99.99th=[43254] 00:34:43.093 
bw ( KiB/s): min= 672, max= 832, per=50.06%, avg=760.00, stdev=32.63, samples=20 00:34:43.093 iops : min= 168, max= 208, avg=190.00, stdev= 8.16, samples=20 00:34:43.093 lat (usec) : 750=49.79%, 1000=0.21% 00:34:43.093 lat (msec) : 50=50.00% 00:34:43.093 cpu : usr=94.42%, sys=5.27%, ctx=24, majf=0, minf=136 00:34:43.093 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:43.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.093 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.093 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.093 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:43.093 filename1: (groupid=0, jobs=1): err= 0: pid=33630: Wed May 15 02:03:06 2024 00:34:43.093 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10005msec) 00:34:43.093 slat (nsec): min=7282, max=73796, avg=9082.96, stdev=2966.62 00:34:43.093 clat (usec): min=555, max=43067, avg=21077.79, stdev=20355.83 00:34:43.093 lat (usec): min=563, max=43099, avg=21086.87, stdev=20355.55 00:34:43.093 clat percentiles (usec): 00:34:43.093 | 1.00th=[ 594], 5.00th=[ 619], 10.00th=[ 619], 20.00th=[ 635], 00:34:43.093 | 30.00th=[ 644], 40.00th=[ 660], 50.00th=[41157], 60.00th=[41157], 00:34:43.093 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:43.093 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:34:43.093 | 99.99th=[43254] 00:34:43.093 bw ( KiB/s): min= 672, max= 768, per=49.79%, avg=756.80, stdev=26.01, samples=20 00:34:43.093 iops : min= 168, max= 192, avg=189.20, stdev= 6.50, samples=20 00:34:43.093 lat (usec) : 750=49.47%, 1000=0.32% 00:34:43.093 lat (msec) : 50=50.21% 00:34:43.093 cpu : usr=94.56%, sys=5.13%, ctx=14, majf=0, minf=168 00:34:43.093 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:43.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.093 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.093 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.093 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:43.093 00:34:43.093 Run status group 0 (all jobs): 00:34:43.093 READ: bw=1518KiB/s (1555kB/s), 758KiB/s-761KiB/s (776kB/s-779kB/s), io=14.8MiB (15.6MB), run=10005-10011msec 00:34:43.351 02:03:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:34:43.351 02:03:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:34:43.351 02:03:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:43.351 02:03:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:43.351 02:03:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:34:43.351 02:03:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:43.351 02:03:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:43.351 02:03:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:43.351 02:03:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:43.351 02:03:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:43.351 02:03:07 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:34:43.351 02:03:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:43.351 02:03:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:43.351 02:03:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:43.352 02:03:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:43.352 02:03:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:34:43.352 02:03:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:43.352 02:03:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:43.352 02:03:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:43.352 02:03:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:43.352 02:03:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:43.352 02:03:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:43.352 02:03:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:43.352 02:03:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:43.352 00:34:43.352 real 0m11.350s 00:34:43.352 user 0m20.298s 00:34:43.352 sys 0m1.346s 00:34:43.352 02:03:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # xtrace_disable 00:34:43.352 02:03:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:43.352 ************************************ 00:34:43.352 END TEST fio_dif_1_multi_subsystems 00:34:43.352 ************************************ 00:34:43.352 02:03:07 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:43.352 02:03:07 nvmf_dif -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:34:43.352 02:03:07 nvmf_dif -- common/autotest_common.sh@1104 -- # xtrace_disable 00:34:43.352 02:03:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:43.352 ************************************ 00:34:43.352 START TEST fio_dif_rand_params 00:34:43.352 ************************************ 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # fio_dif_rand_params 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:43.352 
02:03:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:43.352 bdev_null0 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:43.352 [2024-05-15 02:03:07.152841] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:43.352 { 00:34:43.352 "params": { 00:34:43.352 "name": "Nvme$subsystem", 00:34:43.352 "trtype": "$TEST_TRANSPORT", 00:34:43.352 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:43.352 "adrfam": "ipv4", 00:34:43.352 "trsvcid": "$NVMF_PORT", 00:34:43.352 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:43.352 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:43.352 "hdgst": ${hdgst:-false}, 00:34:43.352 "ddgst": ${ddgst:-false} 00:34:43.352 }, 00:34:43.352 "method": "bdev_nvme_attach_controller" 00:34:43.352 } 00:34:43.352 EOF 00:34:43.352 )") 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1353 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local sanitizers 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # shift 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local asan_lib= 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libasan 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
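fio is driven here through SPDK's bdev plugin: the generated target JSON arrives on one file descriptor, the fio job file on another, and the plugin is LD_PRELOADed, matching the `fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61` trace. A condensed sketch of that invocation; the explicit `62<`/`61<` redirections are one way to populate the fds (the script itself uses shell substitutions), and the file names are illustrative:

    # subsystems.json: the '{ "params": { ... }, "method": "bdev_nvme_attach_controller" }'
    # blob printed in the trace; dif.job: the fio job description.
    LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 \
        62< subsystems.json 61< dif.job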
00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:43.352 "params": { 00:34:43.352 "name": "Nvme0", 00:34:43.352 "trtype": "tcp", 00:34:43.352 "traddr": "10.0.0.2", 00:34:43.352 "adrfam": "ipv4", 00:34:43.352 "trsvcid": "4420", 00:34:43.352 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:43.352 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:43.352 "hdgst": false, 00:34:43.352 "ddgst": false 00:34:43.352 }, 00:34:43.352 "method": "bdev_nvme_attach_controller" 00:34:43.352 }' 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:43.352 02:03:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:43.618 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:43.618 ... 
00:34:43.618 fio-3.35
00:34:43.618 Starting 3 threads
00:34:43.618 EAL: No free 2048 kB hugepages reported on node 1
00:34:50.190
00:34:50.190 filename0: (groupid=0, jobs=1): err= 0: pid=35456: Wed May 15 02:03:13 2024
00:34:50.190 read: IOPS=207, BW=25.9MiB/s (27.2MB/s)(130MiB/5005msec)
00:34:50.190 slat (nsec): min=5133, max=30310, avg=13554.87, stdev=1855.66
00:34:50.190 clat (usec): min=4524, max=62210, avg=14459.57, stdev=11685.91
00:34:50.190 lat (usec): min=4537, max=62223, avg=14473.12, stdev=11685.78
00:34:50.190 clat percentiles (usec):
00:34:50.191 | 1.00th=[ 4817], 5.00th=[ 5342], 10.00th=[ 7701], 20.00th=[ 8586],
00:34:50.191 | 30.00th=[10552], 40.00th=[11338], 50.00th=[11863], 60.00th=[12256],
00:34:50.191 | 70.00th=[12649], 80.00th=[13304], 90.00th=[14746], 95.00th=[50594],
00:34:50.191 | 99.00th=[54264], 99.50th=[54789], 99.90th=[59507], 99.95th=[62129],
00:34:50.191 | 99.99th=[62129]
00:34:50.191 bw ( KiB/s): min=18725, max=36096, per=31.48%, avg=26474.10, stdev=5244.79, samples=10
00:34:50.191 iops : min= 146, max= 282, avg=206.80, stdev=41.02, samples=10
00:34:50.191 lat (msec) : 10=27.77%, 20=63.26%, 50=3.57%, 100=5.40%
00:34:50.191 cpu : usr=92.87%, sys=6.57%, ctx=30, majf=0, minf=96
00:34:50.191 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:50.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:50.191 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:50.191 issued rwts: total=1037,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:50.191 latency : target=0, window=0, percentile=100.00%, depth=3
00:34:50.191 filename0: (groupid=0, jobs=1): err= 0: pid=35457: Wed May 15 02:03:13 2024
00:34:50.191 read: IOPS=241, BW=30.2MiB/s (31.7MB/s)(151MiB/5006msec)
00:34:50.191 slat (nsec): min=4759, max=30681, avg=14590.28, stdev=2403.23
00:34:50.191 clat (usec): min=4587, max=89140, avg=12390.99, stdev=9568.64
00:34:50.191 lat (usec): min=4600, max=89153, avg=12405.58, stdev=9568.63
00:34:50.191 clat percentiles (usec):
00:34:50.191 | 1.00th=[ 4948], 5.00th=[ 5407], 10.00th=[ 6521], 20.00th=[ 7963],
00:34:50.191 | 30.00th=[ 8717], 40.00th=[10290], 50.00th=[11076], 60.00th=[11600],
00:34:50.191 | 70.00th=[12125], 80.00th=[12649], 90.00th=[13304], 95.00th=[46400],
00:34:50.191 | 99.00th=[51643], 99.50th=[52691], 99.90th=[53740], 99.95th=[89654],
00:34:50.191 | 99.99th=[89654]
00:34:50.191 bw ( KiB/s): min=22272, max=43008, per=36.75%, avg=30903.60, stdev=6400.88, samples=10
00:34:50.191 iops : min= 174, max= 336, avg=241.40, stdev=50.06, samples=10
00:34:50.191 lat (msec) : 10=37.93%, 20=56.45%, 50=3.14%, 100=2.48%
00:34:50.191 cpu : usr=91.97%, sys=7.39%, ctx=9, majf=0, minf=145
00:34:50.191 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:50.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:50.191 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:50.191 issued rwts: total=1210,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:50.191 latency : target=0, window=0, percentile=100.00%, depth=3
00:34:50.191 filename0: (groupid=0, jobs=1): err= 0: pid=35458: Wed May 15 02:03:13 2024
00:34:50.191 read: IOPS=208, BW=26.0MiB/s (27.3MB/s)(130MiB/5005msec)
00:34:50.191 slat (nsec): min=4598, max=34965, avg=14001.40, stdev=2773.67
00:34:50.191 clat (usec): min=4760, max=60634, avg=14389.39, stdev=10699.45
00:34:50.191 lat (usec): min=4772, max=60648, avg=14403.39, stdev=10699.30
00:34:50.191 clat percentiles (usec):
00:34:50.191 | 1.00th=[ 5145], 5.00th=[ 7308], 10.00th=[ 8094], 20.00th=[ 8979],
00:34:50.191 | 30.00th=[10159], 40.00th=[11338], 50.00th=[11994], 60.00th=[12518],
00:34:50.191 | 70.00th=[13435], 80.00th=[14877], 90.00th=[16188], 95.00th=[50594],
00:34:50.191 | 99.00th=[53216], 99.50th=[53740], 99.90th=[60556], 99.95th=[60556],
00:34:50.191 | 99.99th=[60556]
00:34:50.191 bw ( KiB/s): min=19200, max=32768, per=31.63%, avg=26598.40, stdev=3796.04, samples=10
00:34:50.191 iops : min= 150, max= 256, avg=207.80, stdev=29.66, samples=10
00:34:50.191 lat (msec) : 10=28.89%, 20=63.92%, 50=1.92%, 100=5.28%
00:34:50.191 cpu : usr=93.13%, sys=6.41%, ctx=10, majf=0, minf=117
00:34:50.191 IO depths : 1=1.7%, 2=98.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:50.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:50.191 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:50.191 issued rwts: total=1042,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:50.191 latency : target=0, window=0, percentile=100.00%, depth=3
00:34:50.191
00:34:50.191 Run status group 0 (all jobs):
00:34:50.191 READ: bw=82.1MiB/s (86.1MB/s), 25.9MiB/s-30.2MiB/s (27.2MB/s-31.7MB/s), io=411MiB (431MB), run=5005-5006msec
00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0
00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable
00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable
00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2
00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k
00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8
00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16
00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime=
00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2
00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2
00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub
00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0
00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0
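create_subsystems 0 1 2 now rebuilds the targets with DIF type 2 protection. Per subsystem the loop that follows issues the same four RPCs each time; pulled out of the trace and written as a plain rpc.py session (rpc_cmd is the test harness's wrapper around SPDK's rpc script, whose scripts/rpc.py path is assumed here), subsystem 0 amounts to:

    # 64 MiB null bdev: 512-byte blocks plus 16 bytes of metadata, DIF type 2
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
    # NVMe-oF subsystem that any host may connect to
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    # expose the bdev as a namespace, then listen on NVMe/TCP 10.0.0.2:4420
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420

Note that the teardown above ran in the opposite order, nvmf_delete_subsystem before bdev_null_delete, so the namespace disappears before its backing bdev does.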
00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:50.191 bdev_null0 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:50.191 [2024-05-15 02:03:13.320265] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:50.191 bdev_null1 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:50.191 bdev_null2 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:50.191 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat 
<<-EOF 00:34:50.192 { 00:34:50.192 "params": { 00:34:50.192 "name": "Nvme$subsystem", 00:34:50.192 "trtype": "$TEST_TRANSPORT", 00:34:50.192 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:50.192 "adrfam": "ipv4", 00:34:50.192 "trsvcid": "$NVMF_PORT", 00:34:50.192 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:50.192 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:50.192 "hdgst": ${hdgst:-false}, 00:34:50.192 "ddgst": ${ddgst:-false} 00:34:50.192 }, 00:34:50.192 "method": "bdev_nvme_attach_controller" 00:34:50.192 } 00:34:50.192 EOF 00:34:50.192 )") 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1353 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local sanitizers 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # shift 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local asan_lib= 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libasan 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:50.192 { 00:34:50.192 "params": { 00:34:50.192 "name": "Nvme$subsystem", 00:34:50.192 "trtype": "$TEST_TRANSPORT", 00:34:50.192 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:50.192 "adrfam": "ipv4", 00:34:50.192 "trsvcid": "$NVMF_PORT", 00:34:50.192 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:50.192 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:50.192 "hdgst": ${hdgst:-false}, 00:34:50.192 "ddgst": ${ddgst:-false} 00:34:50.192 }, 00:34:50.192 "method": "bdev_nvme_attach_controller" 00:34:50.192 } 00:34:50.192 EOF 00:34:50.192 )") 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file++ )) 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:50.192 { 00:34:50.192 "params": { 00:34:50.192 "name": "Nvme$subsystem", 00:34:50.192 "trtype": "$TEST_TRANSPORT", 00:34:50.192 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:50.192 "adrfam": "ipv4", 00:34:50.192 "trsvcid": "$NVMF_PORT", 00:34:50.192 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:50.192 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:50.192 "hdgst": ${hdgst:-false}, 00:34:50.192 "ddgst": ${ddgst:-false} 00:34:50.192 }, 00:34:50.192 "method": "bdev_nvme_attach_controller" 00:34:50.192 } 00:34:50.192 EOF 00:34:50.192 )") 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:50.192 "params": { 00:34:50.192 "name": "Nvme0", 00:34:50.192 "trtype": "tcp", 00:34:50.192 "traddr": "10.0.0.2", 00:34:50.192 "adrfam": "ipv4", 00:34:50.192 "trsvcid": "4420", 00:34:50.192 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:50.192 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:50.192 "hdgst": false, 00:34:50.192 "ddgst": false 00:34:50.192 }, 00:34:50.192 "method": "bdev_nvme_attach_controller" 00:34:50.192 },{ 00:34:50.192 "params": { 00:34:50.192 "name": "Nvme1", 00:34:50.192 "trtype": "tcp", 00:34:50.192 "traddr": "10.0.0.2", 00:34:50.192 "adrfam": "ipv4", 00:34:50.192 "trsvcid": "4420", 00:34:50.192 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:50.192 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:50.192 "hdgst": false, 00:34:50.192 "ddgst": false 00:34:50.192 }, 00:34:50.192 "method": "bdev_nvme_attach_controller" 00:34:50.192 },{ 00:34:50.192 "params": { 00:34:50.192 "name": "Nvme2", 00:34:50.192 "trtype": "tcp", 00:34:50.192 "traddr": "10.0.0.2", 00:34:50.192 "adrfam": "ipv4", 00:34:50.192 "trsvcid": "4420", 00:34:50.192 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:50.192 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:50.192 "hdgst": false, 00:34:50.192 "ddgst": false 00:34:50.192 }, 00:34:50.192 "method": "bdev_nvme_attach_controller" 00:34:50.192 }' 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1342 -- # asan_lib= 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:50.192 02:03:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:50.192 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:50.192 ... 00:34:50.192 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:50.192 ... 00:34:50.192 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:50.192 ... 00:34:50.192 fio-3.35 00:34:50.192 Starting 24 threads 00:34:50.192 EAL: No free 2048 kB hugepages reported on node 1 00:35:02.406 00:35:02.406 filename0: (groupid=0, jobs=1): err= 0: pid=36389: Wed May 15 02:03:24 2024 00:35:02.406 read: IOPS=75, BW=302KiB/s (310kB/s)(3072KiB/10157msec) 00:35:02.406 slat (nsec): min=3545, max=43458, avg=12103.54, stdev=5153.58 00:35:02.406 clat (msec): min=3, max=289, avg=211.43, stdev=83.53 00:35:02.406 lat (msec): min=3, max=289, avg=211.44, stdev=83.53 00:35:02.406 clat percentiles (msec): 00:35:02.406 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 29], 20.00th=[ 171], 00:35:02.406 | 30.00th=[ 220], 40.00th=[ 234], 50.00th=[ 249], 60.00th=[ 251], 00:35:02.406 | 70.00th=[ 262], 80.00th=[ 271], 90.00th=[ 275], 95.00th=[ 275], 00:35:02.406 | 99.00th=[ 288], 99.50th=[ 288], 99.90th=[ 292], 99.95th=[ 292], 00:35:02.406 | 99.99th=[ 292] 00:35:02.406 bw ( KiB/s): min= 144, max= 1136, per=5.47%, avg=300.80, stdev=200.02, samples=20 00:35:02.406 iops : min= 36, max= 284, avg=75.20, stdev=50.00, samples=20 00:35:02.406 lat (msec) : 4=2.99%, 10=5.34%, 50=2.08%, 100=4.17%, 250=42.97% 00:35:02.406 lat (msec) : 500=42.45% 00:35:02.406 cpu : usr=98.33%, sys=1.29%, ctx=16, majf=0, minf=67 00:35:02.406 IO depths : 1=0.9%, 2=7.0%, 4=24.5%, 8=56.0%, 16=11.6%, 32=0.0%, >=64=0.0% 00:35:02.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.406 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.406 issued rwts: total=768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:02.406 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:02.406 filename0: (groupid=0, jobs=1): err= 0: pid=36390: Wed May 15 02:03:24 2024 00:35:02.406 read: IOPS=55, BW=221KiB/s (226kB/s)(2240KiB/10128msec) 00:35:02.406 slat (usec): min=8, max=122, avg=36.96, stdev=34.01 00:35:02.406 clat (msec): min=187, max=537, avg=289.05, stdev=68.84 00:35:02.406 lat (msec): min=187, max=537, avg=289.09, stdev=68.86 00:35:02.406 clat percentiles (msec): 00:35:02.406 | 1.00th=[ 218], 5.00th=[ 220], 10.00th=[ 224], 20.00th=[ 236], 00:35:02.406 | 30.00th=[ 247], 40.00th=[ 255], 50.00th=[ 264], 60.00th=[ 271], 00:35:02.406 | 70.00th=[ 279], 80.00th=[ 376], 90.00th=[ 405], 95.00th=[ 414], 00:35:02.406 | 99.00th=[ 514], 99.50th=[ 531], 99.90th=[ 542], 99.95th=[ 542], 00:35:02.406 | 99.99th=[ 542] 00:35:02.406 bw ( KiB/s): min= 128, max= 368, per=3.96%, avg=217.60, stdev=69.34, samples=20 00:35:02.406 iops : min= 32, max= 92, avg=54.40, stdev=17.33, samples=20 00:35:02.406 lat (msec) : 250=34.29%, 500=64.64%, 750=1.07% 00:35:02.406 cpu : usr=98.16%, 
sys=1.27%, ctx=33, majf=0, minf=30 00:35:02.407 IO depths : 1=1.6%, 2=7.9%, 4=25.0%, 8=54.6%, 16=10.9%, 32=0.0%, >=64=0.0% 00:35:02.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.407 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.407 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:02.407 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:02.407 filename0: (groupid=0, jobs=1): err= 0: pid=36391: Wed May 15 02:03:24 2024 00:35:02.407 read: IOPS=61, BW=246KiB/s (252kB/s)(2496KiB/10151msec) 00:35:02.407 slat (usec): min=3, max=110, avg=39.42, stdev=32.34 00:35:02.407 clat (msec): min=4, max=480, avg=258.76, stdev=91.14 00:35:02.407 lat (msec): min=4, max=480, avg=258.80, stdev=91.15 00:35:02.407 clat percentiles (msec): 00:35:02.407 | 1.00th=[ 6], 5.00th=[ 15], 10.00th=[ 90], 20.00th=[ 243], 00:35:02.407 | 30.00th=[ 259], 40.00th=[ 262], 50.00th=[ 268], 60.00th=[ 271], 00:35:02.407 | 70.00th=[ 279], 80.00th=[ 334], 90.00th=[ 359], 95.00th=[ 380], 00:35:02.407 | 99.00th=[ 435], 99.50th=[ 481], 99.90th=[ 481], 99.95th=[ 481], 00:35:02.407 | 99.99th=[ 481] 00:35:02.407 bw ( KiB/s): min= 128, max= 640, per=4.43%, avg=243.20, stdev=106.33, samples=20 00:35:02.407 iops : min= 32, max= 160, avg=60.80, stdev=26.58, samples=20 00:35:02.407 lat (msec) : 10=2.56%, 20=2.56%, 50=2.56%, 100=2.56%, 250=11.86% 00:35:02.407 lat (msec) : 500=77.88% 00:35:02.407 cpu : usr=98.26%, sys=1.27%, ctx=20, majf=0, minf=41 00:35:02.407 IO depths : 1=1.6%, 2=5.1%, 4=16.7%, 8=65.7%, 16=10.9%, 32=0.0%, >=64=0.0% 00:35:02.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.407 complete : 0=0.0%, 4=91.8%, 8=2.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.407 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:02.407 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:02.407 filename0: (groupid=0, jobs=1): err= 0: pid=36392: Wed May 15 02:03:24 2024 00:35:02.407 read: IOPS=51, BW=207KiB/s (212kB/s)(2096KiB/10148msec) 00:35:02.407 slat (usec): min=8, max=109, avg=43.67, stdev=34.39 00:35:02.407 clat (msec): min=151, max=450, avg=308.70, stdev=58.85 00:35:02.407 lat (msec): min=152, max=450, avg=308.74, stdev=58.87 00:35:02.407 clat percentiles (msec): 00:35:02.407 | 1.00th=[ 153], 5.00th=[ 236], 10.00th=[ 249], 20.00th=[ 262], 00:35:02.407 | 30.00th=[ 271], 40.00th=[ 279], 50.00th=[ 284], 60.00th=[ 334], 00:35:02.407 | 70.00th=[ 351], 80.00th=[ 376], 90.00th=[ 397], 95.00th=[ 405], 00:35:02.407 | 99.00th=[ 414], 99.50th=[ 414], 99.90th=[ 451], 99.95th=[ 451], 00:35:02.407 | 99.99th=[ 451] 00:35:02.407 bw ( KiB/s): min= 128, max= 256, per=3.70%, avg=203.20, stdev=61.88, samples=20 00:35:02.407 iops : min= 32, max= 64, avg=50.80, stdev=15.47, samples=20 00:35:02.407 lat (msec) : 250=12.60%, 500=87.40% 00:35:02.407 cpu : usr=98.33%, sys=1.21%, ctx=25, majf=0, minf=28 00:35:02.407 IO depths : 1=4.2%, 2=9.5%, 4=22.1%, 8=55.7%, 16=8.4%, 32=0.0%, >=64=0.0% 00:35:02.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.407 complete : 0=0.0%, 4=93.1%, 8=1.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.407 issued rwts: total=524,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:02.407 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:02.407 filename0: (groupid=0, jobs=1): err= 0: pid=36393: Wed May 15 02:03:24 2024 00:35:02.407 read: IOPS=52, BW=212KiB/s (217kB/s)(2144KiB/10126msec) 00:35:02.407 slat (usec): min=8, max=176, 
avg=34.81, stdev=32.61 00:35:02.407 clat (msec): min=168, max=513, avg=299.69, stdev=61.33 00:35:02.407 lat (msec): min=168, max=513, avg=299.73, stdev=61.34 00:35:02.407 clat percentiles (msec): 00:35:02.407 | 1.00th=[ 224], 5.00th=[ 228], 10.00th=[ 236], 20.00th=[ 255], 00:35:02.407 | 30.00th=[ 262], 40.00th=[ 266], 50.00th=[ 275], 60.00th=[ 284], 00:35:02.407 | 70.00th=[ 347], 80.00th=[ 359], 90.00th=[ 384], 95.00th=[ 414], 00:35:02.407 | 99.00th=[ 460], 99.50th=[ 477], 99.90th=[ 514], 99.95th=[ 514], 00:35:02.407 | 99.99th=[ 514] 00:35:02.407 bw ( KiB/s): min= 128, max= 256, per=3.86%, avg=212.00, stdev=54.17, samples=20 00:35:02.407 iops : min= 32, max= 64, avg=53.00, stdev=13.54, samples=20 00:35:02.407 lat (msec) : 250=14.18%, 500=85.45%, 750=0.37% 00:35:02.407 cpu : usr=98.08%, sys=1.38%, ctx=39, majf=0, minf=46 00:35:02.407 IO depths : 1=1.7%, 2=4.9%, 4=15.5%, 8=67.0%, 16=11.0%, 32=0.0%, >=64=0.0% 00:35:02.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.407 complete : 0=0.0%, 4=91.3%, 8=3.4%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.407 issued rwts: total=536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:02.407 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:02.407 filename0: (groupid=0, jobs=1): err= 0: pid=36394: Wed May 15 02:03:24 2024 00:35:02.407 read: IOPS=62, BW=249KiB/s (255kB/s)(2528KiB/10136msec) 00:35:02.407 slat (usec): min=8, max=148, avg=14.38, stdev=11.83 00:35:02.407 clat (msec): min=160, max=399, avg=255.84, stdev=33.50 00:35:02.407 lat (msec): min=160, max=399, avg=255.86, stdev=33.50 00:35:02.407 clat percentiles (msec): 00:35:02.407 | 1.00th=[ 184], 5.00th=[ 218], 10.00th=[ 222], 20.00th=[ 230], 00:35:02.407 | 30.00th=[ 239], 40.00th=[ 253], 50.00th=[ 259], 60.00th=[ 264], 00:35:02.407 | 70.00th=[ 268], 80.00th=[ 271], 90.00th=[ 284], 95.00th=[ 309], 00:35:02.407 | 99.00th=[ 397], 99.50th=[ 401], 99.90th=[ 401], 99.95th=[ 401], 00:35:02.407 | 99.99th=[ 401] 00:35:02.407 bw ( KiB/s): min= 176, max= 256, per=4.48%, avg=246.40, stdev=25.11, samples=20 00:35:02.407 iops : min= 44, max= 64, avg=61.60, stdev= 6.28, samples=20 00:35:02.407 lat (msec) : 250=36.71%, 500=63.29% 00:35:02.407 cpu : usr=97.96%, sys=1.50%, ctx=27, majf=0, minf=49 00:35:02.407 IO depths : 1=0.8%, 2=3.6%, 4=14.6%, 8=69.1%, 16=11.9%, 32=0.0%, >=64=0.0% 00:35:02.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.407 complete : 0=0.0%, 4=91.1%, 8=3.5%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.407 issued rwts: total=632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:02.407 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:02.407 filename0: (groupid=0, jobs=1): err= 0: pid=36395: Wed May 15 02:03:24 2024 00:35:02.407 read: IOPS=63, BW=255KiB/s (262kB/s)(2592KiB/10146msec) 00:35:02.407 slat (usec): min=5, max=109, avg=20.60, stdev=19.98 00:35:02.407 clat (msec): min=44, max=410, avg=248.14, stdev=53.75 00:35:02.407 lat (msec): min=44, max=410, avg=248.16, stdev=53.75 00:35:02.407 clat percentiles (msec): 00:35:02.407 | 1.00th=[ 45], 5.00th=[ 174], 10.00th=[ 213], 20.00th=[ 224], 00:35:02.407 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 259], 60.00th=[ 264], 00:35:02.407 | 70.00th=[ 268], 80.00th=[ 275], 90.00th=[ 284], 95.00th=[ 313], 00:35:02.407 | 99.00th=[ 397], 99.50th=[ 409], 99.90th=[ 409], 99.95th=[ 409], 00:35:02.407 | 99.99th=[ 409] 00:35:02.407 bw ( KiB/s): min= 144, max= 384, per=4.67%, avg=256.80, stdev=51.25, samples=20 00:35:02.407 iops : min= 36, max= 96, avg=64.20, stdev=12.81, 
samples=20 00:35:02.407 lat (msec) : 50=2.47%, 100=2.47%, 250=36.11%, 500=58.95% 00:35:02.407 cpu : usr=98.29%, sys=1.20%, ctx=47, majf=0, minf=42 00:35:02.407 IO depths : 1=2.5%, 2=7.1%, 4=19.9%, 8=60.3%, 16=10.2%, 32=0.0%, >=64=0.0% 00:35:02.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.407 complete : 0=0.0%, 4=92.6%, 8=2.0%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.407 issued rwts: total=648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:02.407 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:02.407 filename0: (groupid=0, jobs=1): err= 0: pid=36396: Wed May 15 02:03:24 2024 00:35:02.407 read: IOPS=54, BW=220KiB/s (225kB/s)(2224KiB/10116msec) 00:35:02.407 slat (nsec): min=5733, max=97902, avg=25113.03, stdev=27478.30 00:35:02.407 clat (msec): min=124, max=544, avg=290.75, stdev=57.50 00:35:02.407 lat (msec): min=124, max=544, avg=290.78, stdev=57.51 00:35:02.407 clat percentiles (msec): 00:35:02.407 | 1.00th=[ 125], 5.00th=[ 226], 10.00th=[ 247], 20.00th=[ 255], 00:35:02.407 | 30.00th=[ 259], 40.00th=[ 264], 50.00th=[ 268], 60.00th=[ 279], 00:35:02.407 | 70.00th=[ 313], 80.00th=[ 347], 90.00th=[ 380], 95.00th=[ 397], 00:35:02.407 | 99.00th=[ 409], 99.50th=[ 447], 99.90th=[ 542], 99.95th=[ 542], 00:35:02.407 | 99.99th=[ 542] 00:35:02.407 bw ( KiB/s): min= 128, max= 256, per=3.94%, avg=216.00, stdev=51.78, samples=20 00:35:02.407 iops : min= 32, max= 64, avg=54.00, stdev=12.95, samples=20 00:35:02.407 lat (msec) : 250=12.95%, 500=86.69%, 750=0.36% 00:35:02.407 cpu : usr=98.50%, sys=1.06%, ctx=37, majf=0, minf=32 00:35:02.407 IO depths : 1=2.2%, 2=6.5%, 4=19.1%, 8=61.9%, 16=10.4%, 32=0.0%, >=64=0.0% 00:35:02.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.407 complete : 0=0.0%, 4=92.4%, 8=2.1%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.407 issued rwts: total=556,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:02.407 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:02.407 filename1: (groupid=0, jobs=1): err= 0: pid=36397: Wed May 15 02:03:24 2024 00:35:02.407 read: IOPS=63, BW=253KiB/s (259kB/s)(2568KiB/10148msec) 00:35:02.407 slat (usec): min=5, max=157, avg=19.59, stdev=22.19 00:35:02.407 clat (msec): min=42, max=397, avg=251.41, stdev=55.38 00:35:02.407 lat (msec): min=42, max=397, avg=251.43, stdev=55.38 00:35:02.407 clat percentiles (msec): 00:35:02.407 | 1.00th=[ 43], 5.00th=[ 174], 10.00th=[ 215], 20.00th=[ 230], 00:35:02.407 | 30.00th=[ 241], 40.00th=[ 257], 50.00th=[ 262], 60.00th=[ 268], 00:35:02.407 | 70.00th=[ 271], 80.00th=[ 275], 90.00th=[ 292], 95.00th=[ 338], 00:35:02.407 | 99.00th=[ 388], 99.50th=[ 397], 99.90th=[ 397], 99.95th=[ 397], 00:35:02.407 | 99.99th=[ 397] 00:35:02.407 bw ( KiB/s): min= 176, max= 384, per=4.59%, avg=252.80, stdev=40.08, samples=20 00:35:02.407 iops : min= 44, max= 96, avg=63.20, stdev=10.02, samples=20 00:35:02.407 lat (msec) : 50=2.49%, 100=2.49%, 250=31.15%, 500=63.86% 00:35:02.407 cpu : usr=98.21%, sys=1.32%, ctx=52, majf=0, minf=40 00:35:02.407 IO depths : 1=0.8%, 2=4.5%, 4=17.3%, 8=65.6%, 16=11.8%, 32=0.0%, >=64=0.0% 00:35:02.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.407 complete : 0=0.0%, 4=91.9%, 8=2.6%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.407 issued rwts: total=642,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:02.407 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:02.407 filename1: (groupid=0, jobs=1): err= 0: pid=36398: Wed May 15 02:03:24 2024 00:35:02.407 read: IOPS=44, 
BW=176KiB/s (181kB/s)(1784KiB/10115msec) 00:35:02.407 slat (usec): min=6, max=109, avg=36.83, stdev=32.64 00:35:02.407 clat (msec): min=176, max=559, avg=362.12, stdev=68.97 00:35:02.407 lat (msec): min=176, max=560, avg=362.16, stdev=68.98 00:35:02.407 clat percentiles (msec): 00:35:02.407 | 1.00th=[ 226], 5.00th=[ 243], 10.00th=[ 255], 20.00th=[ 313], 00:35:02.407 | 30.00th=[ 334], 40.00th=[ 368], 50.00th=[ 376], 60.00th=[ 393], 00:35:02.407 | 70.00th=[ 397], 80.00th=[ 405], 90.00th=[ 426], 95.00th=[ 426], 00:35:02.407 | 99.00th=[ 558], 99.50th=[ 558], 99.90th=[ 558], 99.95th=[ 558], 00:35:02.407 | 99.99th=[ 558] 00:35:02.407 bw ( KiB/s): min= 112, max= 256, per=3.14%, avg=172.00, stdev=62.05, samples=20 00:35:02.407 iops : min= 28, max= 64, avg=43.00, stdev=15.51, samples=20 00:35:02.407 lat (msec) : 250=6.28%, 500=90.13%, 750=3.59% 00:35:02.407 cpu : usr=97.98%, sys=1.37%, ctx=41, majf=0, minf=40 00:35:02.407 IO depths : 1=3.6%, 2=9.9%, 4=25.1%, 8=52.7%, 16=8.7%, 32=0.0%, >=64=0.0% 00:35:02.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.407 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.407 issued rwts: total=446,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:02.407 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:02.407 filename1: (groupid=0, jobs=1): err= 0: pid=36399: Wed May 15 02:03:24 2024 00:35:02.407 read: IOPS=63, BW=255KiB/s (261kB/s)(2584KiB/10130msec) 00:35:02.408 slat (usec): min=8, max=114, avg=15.64, stdev=12.51 00:35:02.408 clat (msec): min=163, max=405, avg=250.40, stdev=32.01 00:35:02.408 lat (msec): min=163, max=405, avg=250.41, stdev=32.01 00:35:02.408 clat percentiles (msec): 00:35:02.408 | 1.00th=[ 165], 5.00th=[ 192], 10.00th=[ 213], 20.00th=[ 224], 00:35:02.408 | 30.00th=[ 239], 40.00th=[ 247], 50.00th=[ 257], 60.00th=[ 264], 00:35:02.408 | 70.00th=[ 268], 80.00th=[ 275], 90.00th=[ 279], 95.00th=[ 288], 00:35:02.408 | 99.00th=[ 334], 99.50th=[ 405], 99.90th=[ 405], 99.95th=[ 405], 00:35:02.408 | 99.99th=[ 405] 00:35:02.408 bw ( KiB/s): min= 144, max= 368, per=4.59%, avg=252.00, stdev=41.16, samples=20 00:35:02.408 iops : min= 36, max= 92, avg=63.00, stdev=10.29, samples=20 00:35:02.408 lat (msec) : 250=44.89%, 500=55.11% 00:35:02.408 cpu : usr=97.97%, sys=1.43%, ctx=29, majf=0, minf=44 00:35:02.408 IO depths : 1=0.5%, 2=5.7%, 4=22.0%, 8=59.8%, 16=12.1%, 32=0.0%, >=64=0.0% 00:35:02.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.408 complete : 0=0.0%, 4=93.4%, 8=1.1%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.408 issued rwts: total=646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:02.408 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:02.408 filename1: (groupid=0, jobs=1): err= 0: pid=36400: Wed May 15 02:03:24 2024 00:35:02.408 read: IOPS=42, BW=171KiB/s (175kB/s)(1728KiB/10116msec) 00:35:02.408 slat (nsec): min=4166, max=71951, avg=29381.50, stdev=8193.70 00:35:02.408 clat (msec): min=187, max=543, avg=374.40, stdev=52.70 00:35:02.408 lat (msec): min=187, max=543, avg=374.43, stdev=52.70 00:35:02.408 clat percentiles (msec): 00:35:02.408 | 1.00th=[ 253], 5.00th=[ 264], 10.00th=[ 309], 20.00th=[ 347], 00:35:02.408 | 30.00th=[ 359], 40.00th=[ 372], 50.00th=[ 380], 60.00th=[ 388], 00:35:02.408 | 70.00th=[ 405], 80.00th=[ 409], 90.00th=[ 426], 95.00th=[ 451], 00:35:02.408 | 99.00th=[ 514], 99.50th=[ 531], 99.90th=[ 542], 99.95th=[ 542], 00:35:02.408 | 99.99th=[ 542] 00:35:02.408 bw ( KiB/s): min= 112, max= 272, per=3.03%, 
avg=166.40, stdev=62.16, samples=20 00:35:02.408 iops : min= 28, max= 68, avg=41.60, stdev=15.54, samples=20 00:35:02.408 lat (msec) : 250=0.93%, 500=96.76%, 750=2.31% 00:35:02.408 cpu : usr=97.79%, sys=1.47%, ctx=16, majf=0, minf=41 00:35:02.408 IO depths : 1=4.9%, 2=11.1%, 4=25.0%, 8=51.4%, 16=7.6%, 32=0.0%, >=64=0.0% 00:35:02.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.408 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.408 issued rwts: total=432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:02.408 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:02.408 filename1: (groupid=0, jobs=1): err= 0: pid=36401: Wed May 15 02:03:24 2024 00:35:02.408 read: IOPS=71, BW=285KiB/s (292kB/s)(2896KiB/10157msec) 00:35:02.408 slat (usec): min=4, max=115, avg=32.29, stdev=29.33 00:35:02.408 clat (msec): min=4, max=403, avg=223.68, stdev=81.52 00:35:02.408 lat (msec): min=4, max=403, avg=223.71, stdev=81.52 00:35:02.408 clat percentiles (msec): 00:35:02.408 | 1.00th=[ 5], 5.00th=[ 9], 10.00th=[ 90], 20.00th=[ 184], 00:35:02.408 | 30.00th=[ 222], 40.00th=[ 230], 50.00th=[ 249], 60.00th=[ 259], 00:35:02.408 | 70.00th=[ 264], 80.00th=[ 271], 90.00th=[ 284], 95.00th=[ 321], 00:35:02.408 | 99.00th=[ 397], 99.50th=[ 405], 99.90th=[ 405], 99.95th=[ 405], 00:35:02.408 | 99.99th=[ 405] 00:35:02.408 bw ( KiB/s): min= 176, max= 881, per=5.16%, avg=283.25, stdev=145.58, samples=20 00:35:02.408 iops : min= 44, max= 220, avg=70.80, stdev=36.34, samples=20 00:35:02.408 lat (msec) : 10=6.63%, 50=2.21%, 100=2.21%, 250=39.78%, 500=49.17% 00:35:02.408 cpu : usr=98.41%, sys=1.14%, ctx=24, majf=0, minf=60 00:35:02.408 IO depths : 1=0.7%, 2=1.8%, 4=9.1%, 8=76.4%, 16=12.0%, 32=0.0%, >=64=0.0% 00:35:02.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.408 complete : 0=0.0%, 4=89.5%, 8=5.2%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.408 issued rwts: total=724,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:02.408 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:02.408 filename1: (groupid=0, jobs=1): err= 0: pid=36402: Wed May 15 02:03:24 2024 00:35:02.408 read: IOPS=55, BW=224KiB/s (229kB/s)(2264KiB/10116msec) 00:35:02.408 slat (usec): min=8, max=104, avg=31.90, stdev=32.35 00:35:02.408 clat (msec): min=135, max=405, avg=285.45, stdev=44.92 00:35:02.408 lat (msec): min=135, max=405, avg=285.48, stdev=44.94 00:35:02.408 clat percentiles (msec): 00:35:02.408 | 1.00th=[ 136], 5.00th=[ 247], 10.00th=[ 255], 20.00th=[ 259], 00:35:02.408 | 30.00th=[ 262], 40.00th=[ 264], 50.00th=[ 268], 60.00th=[ 275], 00:35:02.408 | 70.00th=[ 284], 80.00th=[ 334], 90.00th=[ 363], 95.00th=[ 368], 00:35:02.408 | 99.00th=[ 380], 99.50th=[ 380], 99.90th=[ 405], 99.95th=[ 405], 00:35:02.408 | 99.99th=[ 405] 00:35:02.408 bw ( KiB/s): min= 128, max= 256, per=4.01%, avg=220.00, stdev=54.17, samples=20 00:35:02.408 iops : min= 32, max= 64, avg=55.00, stdev=13.54, samples=20 00:35:02.408 lat (msec) : 250=8.13%, 500=91.87% 00:35:02.408 cpu : usr=98.27%, sys=1.18%, ctx=44, majf=0, minf=33 00:35:02.408 IO depths : 1=3.0%, 2=6.9%, 4=17.8%, 8=62.7%, 16=9.5%, 32=0.0%, >=64=0.0% 00:35:02.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.408 complete : 0=0.0%, 4=91.9%, 8=2.4%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.408 issued rwts: total=566,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:02.408 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:02.408 filename1: (groupid=0, jobs=1): 
err= 0: pid=36403: Wed May 15 02:03:24 2024 00:35:02.408 read: IOPS=66, BW=265KiB/s (271kB/s)(2688KiB/10148msec) 00:35:02.408 slat (usec): min=4, max=118, avg=15.46, stdev=11.87 00:35:02.408 clat (msec): min=42, max=496, avg=241.41, stdev=56.22 00:35:02.408 lat (msec): min=42, max=496, avg=241.42, stdev=56.22 00:35:02.408 clat percentiles (msec): 00:35:02.408 | 1.00th=[ 43], 5.00th=[ 161], 10.00th=[ 188], 20.00th=[ 220], 00:35:02.408 | 30.00th=[ 234], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 255], 00:35:02.408 | 70.00th=[ 266], 80.00th=[ 275], 90.00th=[ 275], 95.00th=[ 284], 00:35:02.408 | 99.00th=[ 426], 99.50th=[ 426], 99.90th=[ 498], 99.95th=[ 498], 00:35:02.408 | 99.99th=[ 498] 00:35:02.408 bw ( KiB/s): min= 128, max= 384, per=4.78%, avg=262.40, stdev=63.87, samples=20 00:35:02.408 iops : min= 32, max= 96, avg=65.60, stdev=15.97, samples=20 00:35:02.408 lat (msec) : 50=2.38%, 100=2.38%, 250=49.11%, 500=46.13% 00:35:02.408 cpu : usr=98.13%, sys=1.44%, ctx=34, majf=0, minf=47 00:35:02.408 IO depths : 1=3.6%, 2=9.8%, 4=25.0%, 8=52.7%, 16=8.9%, 32=0.0%, >=64=0.0% 00:35:02.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.408 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.408 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:02.408 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:02.408 filename1: (groupid=0, jobs=1): err= 0: pid=36404: Wed May 15 02:03:24 2024 00:35:02.408 read: IOPS=53, BW=215KiB/s (220kB/s)(2176KiB/10121msec) 00:35:02.408 slat (usec): min=8, max=114, avg=37.19, stdev=32.94 00:35:02.408 clat (msec): min=129, max=543, avg=297.15, stdev=61.03 00:35:02.408 lat (msec): min=129, max=543, avg=297.19, stdev=61.05 00:35:02.408 clat percentiles (msec): 00:35:02.408 | 1.00th=[ 130], 5.00th=[ 234], 10.00th=[ 243], 20.00th=[ 259], 00:35:02.408 | 30.00th=[ 264], 40.00th=[ 271], 50.00th=[ 275], 60.00th=[ 279], 00:35:02.408 | 70.00th=[ 334], 80.00th=[ 351], 90.00th=[ 376], 95.00th=[ 426], 00:35:02.408 | 99.00th=[ 426], 99.50th=[ 531], 99.90th=[ 542], 99.95th=[ 542], 00:35:02.408 | 99.99th=[ 542] 00:35:02.408 bw ( KiB/s): min= 128, max= 256, per=3.85%, avg=211.20, stdev=59.55, samples=20 00:35:02.408 iops : min= 32, max= 64, avg=52.80, stdev=14.89, samples=20 00:35:02.408 lat (msec) : 250=14.34%, 500=84.93%, 750=0.74% 00:35:02.408 cpu : usr=97.74%, sys=1.49%, ctx=35, majf=0, minf=30 00:35:02.408 IO depths : 1=2.4%, 2=7.7%, 4=22.2%, 8=57.5%, 16=10.1%, 32=0.0%, >=64=0.0% 00:35:02.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.408 complete : 0=0.0%, 4=93.3%, 8=1.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.408 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:02.408 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:02.408 filename2: (groupid=0, jobs=1): err= 0: pid=36405: Wed May 15 02:03:24 2024 00:35:02.408 read: IOPS=42, BW=171KiB/s (175kB/s)(1728KiB/10111msec) 00:35:02.408 slat (nsec): min=9118, max=87599, avg=30019.68, stdev=13104.68 00:35:02.408 clat (msec): min=252, max=530, avg=374.19, stdev=48.33 00:35:02.408 lat (msec): min=252, max=530, avg=374.22, stdev=48.33 00:35:02.408 clat percentiles (msec): 00:35:02.408 | 1.00th=[ 253], 5.00th=[ 271], 10.00th=[ 309], 20.00th=[ 347], 00:35:02.408 | 30.00th=[ 359], 40.00th=[ 372], 50.00th=[ 380], 60.00th=[ 388], 00:35:02.408 | 70.00th=[ 397], 80.00th=[ 409], 90.00th=[ 426], 95.00th=[ 443], 00:35:02.408 | 99.00th=[ 510], 99.50th=[ 523], 99.90th=[ 531], 99.95th=[ 531], 
00:35:02.408 | 99.99th=[ 531] 00:35:02.408 bw ( KiB/s): min= 128, max= 256, per=3.03%, avg=166.40, stdev=60.18, samples=20 00:35:02.408 iops : min= 32, max= 64, avg=41.60, stdev=15.05, samples=20 00:35:02.408 lat (msec) : 500=98.61%, 750=1.39% 00:35:02.408 cpu : usr=98.34%, sys=1.26%, ctx=17, majf=0, minf=36 00:35:02.408 IO depths : 1=5.3%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.2%, 32=0.0%, >=64=0.0% 00:35:02.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.408 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.408 issued rwts: total=432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:02.408 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:02.408 filename2: (groupid=0, jobs=1): err= 0: pid=36406: Wed May 15 02:03:24 2024 00:35:02.408 read: IOPS=42, BW=171KiB/s (175kB/s)(1728KiB/10111msec) 00:35:02.408 slat (usec): min=9, max=111, avg=36.42, stdev=20.30 00:35:02.408 clat (msec): min=240, max=529, avg=374.15, stdev=50.36 00:35:02.408 lat (msec): min=240, max=529, avg=374.19, stdev=50.36 00:35:02.408 clat percentiles (msec): 00:35:02.408 | 1.00th=[ 253], 5.00th=[ 271], 10.00th=[ 309], 20.00th=[ 347], 00:35:02.408 | 30.00th=[ 355], 40.00th=[ 372], 50.00th=[ 380], 60.00th=[ 388], 00:35:02.408 | 70.00th=[ 405], 80.00th=[ 409], 90.00th=[ 426], 95.00th=[ 443], 00:35:02.408 | 99.00th=[ 518], 99.50th=[ 523], 99.90th=[ 531], 99.95th=[ 531], 00:35:02.408 | 99.99th=[ 531] 00:35:02.408 bw ( KiB/s): min= 128, max= 256, per=3.03%, avg=166.40, stdev=60.18, samples=20 00:35:02.408 iops : min= 32, max= 64, avg=41.60, stdev=15.05, samples=20 00:35:02.408 lat (msec) : 250=0.46%, 500=97.69%, 750=1.85% 00:35:02.408 cpu : usr=98.32%, sys=1.27%, ctx=29, majf=0, minf=48 00:35:02.408 IO depths : 1=4.9%, 2=11.1%, 4=25.0%, 8=51.4%, 16=7.6%, 32=0.0%, >=64=0.0% 00:35:02.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.408 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.408 issued rwts: total=432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:02.408 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:02.408 filename2: (groupid=0, jobs=1): err= 0: pid=36407: Wed May 15 02:03:24 2024 00:35:02.408 read: IOPS=56, BW=225KiB/s (230kB/s)(2272KiB/10113msec) 00:35:02.408 slat (usec): min=8, max=105, avg=27.55, stdev=29.22 00:35:02.408 clat (msec): min=140, max=466, avg=284.32, stdev=48.44 00:35:02.408 lat (msec): min=140, max=466, avg=284.35, stdev=48.46 00:35:02.408 clat percentiles (msec): 00:35:02.408 | 1.00th=[ 142], 5.00th=[ 236], 10.00th=[ 245], 20.00th=[ 257], 00:35:02.408 | 30.00th=[ 262], 40.00th=[ 264], 50.00th=[ 268], 60.00th=[ 275], 00:35:02.408 | 70.00th=[ 284], 80.00th=[ 334], 90.00th=[ 363], 95.00th=[ 368], 00:35:02.408 | 99.00th=[ 414], 99.50th=[ 414], 99.90th=[ 468], 99.95th=[ 468], 00:35:02.408 | 99.99th=[ 468] 00:35:02.408 bw ( KiB/s): min= 128, max= 256, per=4.01%, avg=220.80, stdev=52.07, samples=20 00:35:02.408 iops : min= 32, max= 64, avg=55.20, stdev=13.02, samples=20 00:35:02.408 lat (msec) : 250=13.03%, 500=86.97% 00:35:02.408 cpu : usr=98.28%, sys=1.16%, ctx=27, majf=0, minf=34 00:35:02.408 IO depths : 1=1.9%, 2=4.8%, 4=14.4%, 8=68.1%, 16=10.7%, 32=0.0%, >=64=0.0% 00:35:02.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.408 complete : 0=0.0%, 4=91.0%, 8=3.7%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.408 issued rwts: total=568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:02.408 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:35:02.408 filename2: (groupid=0, jobs=1): err= 0: pid=36408: Wed May 15 02:03:24 2024 00:35:02.408 read: IOPS=43, BW=172KiB/s (177kB/s)(1728KiB/10020msec) 00:35:02.408 slat (nsec): min=8446, max=84362, avg=14926.14, stdev=7915.81 00:35:02.408 clat (msec): min=216, max=524, avg=370.96, stdev=47.66 00:35:02.408 lat (msec): min=216, max=524, avg=370.97, stdev=47.66 00:35:02.408 clat percentiles (msec): 00:35:02.408 | 1.00th=[ 255], 5.00th=[ 259], 10.00th=[ 313], 20.00th=[ 342], 00:35:02.408 | 30.00th=[ 355], 40.00th=[ 368], 50.00th=[ 388], 60.00th=[ 393], 00:35:02.408 | 70.00th=[ 397], 80.00th=[ 409], 90.00th=[ 414], 95.00th=[ 435], 00:35:02.408 | 99.00th=[ 485], 99.50th=[ 506], 99.90th=[ 527], 99.95th=[ 527], 00:35:02.408 | 99.99th=[ 527] 00:35:02.408 bw ( KiB/s): min= 128, max= 256, per=3.03%, avg=166.40, stdev=58.59, samples=20 00:35:02.408 iops : min= 32, max= 64, avg=41.60, stdev=14.65, samples=20 00:35:02.408 lat (msec) : 250=0.46%, 500=98.61%, 750=0.93% 00:35:02.408 cpu : usr=98.09%, sys=1.27%, ctx=47, majf=0, minf=39 00:35:02.408 IO depths : 1=3.7%, 2=10.0%, 4=25.0%, 8=52.5%, 16=8.8%, 32=0.0%, >=64=0.0% 00:35:02.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.408 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.408 issued rwts: total=432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:02.408 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:02.408 filename2: (groupid=0, jobs=1): err= 0: pid=36409: Wed May 15 02:03:24 2024 00:35:02.409 read: IOPS=55, BW=224KiB/s (229kB/s)(2264KiB/10124msec) 00:35:02.409 slat (usec): min=8, max=119, avg=32.76, stdev=32.30 00:35:02.409 clat (msec): min=137, max=494, avg=285.66, stdev=50.50 00:35:02.409 lat (msec): min=137, max=494, avg=285.69, stdev=50.52 00:35:02.409 clat percentiles (msec): 00:35:02.409 | 1.00th=[ 138], 5.00th=[ 234], 10.00th=[ 249], 20.00th=[ 257], 00:35:02.409 | 30.00th=[ 262], 40.00th=[ 266], 50.00th=[ 271], 60.00th=[ 279], 00:35:02.409 | 70.00th=[ 284], 80.00th=[ 334], 90.00th=[ 355], 95.00th=[ 376], 00:35:02.409 | 99.00th=[ 456], 99.50th=[ 464], 99.90th=[ 493], 99.95th=[ 493], 00:35:02.409 | 99.99th=[ 493] 00:35:02.409 bw ( KiB/s): min= 128, max= 256, per=3.99%, avg=220.00, stdev=50.83, samples=20 00:35:02.409 iops : min= 32, max= 64, avg=55.00, stdev=12.71, samples=20 00:35:02.409 lat (msec) : 250=11.31%, 500=88.69% 00:35:02.409 cpu : usr=98.38%, sys=1.20%, ctx=18, majf=0, minf=33 00:35:02.409 IO depths : 1=1.2%, 2=4.4%, 4=15.7%, 8=67.3%, 16=11.3%, 32=0.0%, >=64=0.0% 00:35:02.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.409 complete : 0=0.0%, 4=91.4%, 8=3.0%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.409 issued rwts: total=566,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:02.409 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:02.409 filename2: (groupid=0, jobs=1): err= 0: pid=36410: Wed May 15 02:03:24 2024 00:35:02.409 read: IOPS=62, BW=251KiB/s (257kB/s)(2544KiB/10130msec) 00:35:02.409 slat (nsec): min=8563, max=99652, avg=15537.10, stdev=13891.14 00:35:02.409 clat (msec): min=150, max=399, avg=254.22, stdev=27.76 00:35:02.409 lat (msec): min=150, max=400, avg=254.23, stdev=27.76 00:35:02.409 clat percentiles (msec): 00:35:02.409 | 1.00th=[ 205], 5.00th=[ 218], 10.00th=[ 222], 20.00th=[ 232], 00:35:02.409 | 30.00th=[ 241], 40.00th=[ 253], 50.00th=[ 259], 60.00th=[ 262], 00:35:02.409 | 70.00th=[ 268], 80.00th=[ 271], 90.00th=[ 275], 95.00th=[ 284], 00:35:02.409 | 
99.00th=[ 342], 99.50th=[ 401], 99.90th=[ 401], 99.95th=[ 401], 00:35:02.409 | 99.99th=[ 401] 00:35:02.409 bw ( KiB/s): min= 176, max= 272, per=4.52%, avg=248.00, stdev=21.72, samples=20 00:35:02.409 iops : min= 44, max= 68, avg=62.00, stdev= 5.43, samples=20 00:35:02.409 lat (msec) : 250=37.11%, 500=62.89% 00:35:02.409 cpu : usr=98.24%, sys=1.33%, ctx=30, majf=0, minf=63 00:35:02.409 IO depths : 1=0.6%, 2=4.2%, 4=17.0%, 8=66.2%, 16=11.9%, 32=0.0%, >=64=0.0% 00:35:02.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.409 complete : 0=0.0%, 4=91.9%, 8=2.7%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.409 issued rwts: total=636,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:02.409 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:02.409 filename2: (groupid=0, jobs=1): err= 0: pid=36411: Wed May 15 02:03:24 2024 00:35:02.409 read: IOPS=67, BW=271KiB/s (278kB/s)(2752KiB/10150msec) 00:35:02.409 slat (usec): min=8, max=100, avg=22.84, stdev=23.76 00:35:02.409 clat (msec): min=24, max=398, avg=235.51, stdev=68.40 00:35:02.409 lat (msec): min=24, max=398, avg=235.53, stdev=68.39 00:35:02.409 clat percentiles (msec): 00:35:02.409 | 1.00th=[ 25], 5.00th=[ 81], 10.00th=[ 150], 20.00th=[ 186], 00:35:02.409 | 30.00th=[ 222], 40.00th=[ 234], 50.00th=[ 253], 60.00th=[ 262], 00:35:02.409 | 70.00th=[ 266], 80.00th=[ 271], 90.00th=[ 292], 95.00th=[ 342], 00:35:02.409 | 99.00th=[ 397], 99.50th=[ 397], 99.90th=[ 397], 99.95th=[ 397], 00:35:02.409 | 99.99th=[ 397] 00:35:02.409 bw ( KiB/s): min= 176, max= 560, per=4.89%, avg=268.80, stdev=81.85, samples=20 00:35:02.409 iops : min= 44, max= 140, avg=67.20, stdev=20.46, samples=20 00:35:02.409 lat (msec) : 50=2.33%, 100=4.51%, 250=40.55%, 500=52.62% 00:35:02.409 cpu : usr=98.35%, sys=1.21%, ctx=29, majf=0, minf=47 00:35:02.409 IO depths : 1=0.1%, 2=0.3%, 4=6.0%, 8=81.0%, 16=12.6%, 32=0.0%, >=64=0.0% 00:35:02.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.409 complete : 0=0.0%, 4=88.7%, 8=6.2%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.409 issued rwts: total=688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:02.409 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:02.409 filename2: (groupid=0, jobs=1): err= 0: pid=36412: Wed May 15 02:03:24 2024 00:35:02.409 read: IOPS=64, BW=259KiB/s (265kB/s)(2624KiB/10126msec) 00:35:02.409 slat (nsec): min=8558, max=40472, avg=13905.65, stdev=6107.32 00:35:02.409 clat (msec): min=144, max=305, avg=244.90, stdev=33.61 00:35:02.409 lat (msec): min=144, max=305, avg=244.91, stdev=33.61 00:35:02.409 clat percentiles (msec): 00:35:02.409 | 1.00th=[ 146], 5.00th=[ 184], 10.00th=[ 192], 20.00th=[ 220], 00:35:02.409 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 255], 60.00th=[ 264], 00:35:02.409 | 70.00th=[ 268], 80.00th=[ 275], 90.00th=[ 279], 95.00th=[ 284], 00:35:02.409 | 99.00th=[ 288], 99.50th=[ 288], 99.90th=[ 305], 99.95th=[ 305], 00:35:02.409 | 99.99th=[ 305] 00:35:02.409 bw ( KiB/s): min= 144, max= 384, per=4.67%, avg=256.00, stdev=53.70, samples=20 00:35:02.409 iops : min= 36, max= 96, avg=64.00, stdev=13.42, samples=20 00:35:02.409 lat (msec) : 250=47.87%, 500=52.13% 00:35:02.409 cpu : usr=98.44%, sys=1.16%, ctx=19, majf=0, minf=31 00:35:02.409 IO depths : 1=0.8%, 2=7.0%, 4=25.0%, 8=55.5%, 16=11.7%, 32=0.0%, >=64=0.0% 00:35:02.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.409 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.409 issued rwts: total=656,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:35:02.409 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:02.409 00:35:02.409 Run status group 0 (all jobs): 00:35:02.409 READ: bw=5486KiB/s (5618kB/s), 171KiB/s-302KiB/s (175kB/s-310kB/s), io=54.4MiB (57.1MB), run=10020-10157msec 00:35:02.409 02:03:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:02.409 02:03:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:02.409 02:03:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:02.409 02:03:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:02.409 02:03:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:02.409 02:03:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:02.409 02:03:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:02.409 02:03:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:02.409 02:03:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:02.409 02:03:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:02.409 02:03:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:02.409 02:03:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:02.409 02:03:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:02.409 02:03:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:02.409 02:03:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:02.409 02:03:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:02.409 02:03:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:02.409 02:03:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:02.409 02:03:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:02.409 02:03:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:02.409 02:03:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:02.409 02:03:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:02.409 02:03:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 
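Two checks at this boundary. First, the aggregate READ line above is self-consistent: io=54.4MiB is about 55,706KiB, and over the longest job's roughly 10.16s (run=10020-10157msec) that works out to about 5,485KiB/s, matching the reported bw=5486KiB/s. Second, the parameter block that follows (NULL_DIF=1, bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, files=1) is what gen_fio_conf turns into the fio job file for the next run. A sketch of what that generated file plausibly contains, reconstructed from the filenameN job headers fio prints rather than from the verbatim dif.sh output (one [filenameN] section per test file):

    [global]
    ioengine=spdk_bdev     ; supplied by the LD_PRELOADed SPDK plugin
    thread=1
    bs=8k,16k,128k         ; per-direction sizes: read 8k, write 16k, trim 128k
    numjobs=2
    iodepth=8
    runtime=5
    time_based=1

    [filename0]
    rw=randread
    filename=Nvme0n1       ; bdev exposed by bdev_nvme_attach_controller "Nvme0"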
00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:02.409 bdev_null0 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:02.409 [2024-05-15 02:03:25.045138] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=1 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:02.409 bdev_null1 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:02.409 02:03:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:02.409 { 00:35:02.409 "params": { 00:35:02.409 "name": "Nvme$subsystem", 00:35:02.409 "trtype": "$TEST_TRANSPORT", 00:35:02.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:02.410 "adrfam": "ipv4", 00:35:02.410 "trsvcid": "$NVMF_PORT", 00:35:02.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:02.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:02.410 "hdgst": ${hdgst:-false}, 00:35:02.410 "ddgst": ${ddgst:-false} 00:35:02.410 }, 00:35:02.410 "method": "bdev_nvme_attach_controller" 00:35:02.410 } 00:35:02.410 EOF 00:35:02.410 )") 00:35:02.410 02:03:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:02.410 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1353 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:02.410 02:03:25 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:35:02.410 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:02.410 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local sanitizers 00:35:02.410 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:02.410 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # shift 00:35:02.410 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local asan_lib= 00:35:02.410 02:03:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:02.410 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:35:02.410 02:03:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:02.410 02:03:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:02.410 02:03:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:02.410 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:02.410 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libasan 00:35:02.410 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:35:02.410 02:03:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:02.410 02:03:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:02.410 02:03:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:02.410 02:03:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:02.410 02:03:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:02.410 { 00:35:02.410 "params": { 00:35:02.410 "name": "Nvme$subsystem", 00:35:02.410 "trtype": "$TEST_TRANSPORT", 00:35:02.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:02.410 "adrfam": "ipv4", 00:35:02.410 "trsvcid": "$NVMF_PORT", 00:35:02.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:02.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:02.410 "hdgst": ${hdgst:-false}, 00:35:02.410 "ddgst": ${ddgst:-false} 00:35:02.410 }, 00:35:02.410 "method": "bdev_nvme_attach_controller" 00:35:02.410 } 00:35:02.410 EOF 00:35:02.410 )") 00:35:02.410 02:03:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:02.410 02:03:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:02.410 02:03:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:02.410 02:03:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:35:02.410 02:03:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:02.410 02:03:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:02.410 "params": { 00:35:02.410 "name": "Nvme0", 00:35:02.410 "trtype": "tcp", 00:35:02.410 "traddr": "10.0.0.2", 00:35:02.410 "adrfam": "ipv4", 00:35:02.410 "trsvcid": "4420", 00:35:02.410 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:02.410 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:02.410 "hdgst": false, 00:35:02.410 "ddgst": false 00:35:02.410 }, 00:35:02.410 "method": "bdev_nvme_attach_controller" 00:35:02.410 },{ 00:35:02.410 "params": { 00:35:02.410 "name": "Nvme1", 00:35:02.410 "trtype": "tcp", 00:35:02.410 "traddr": "10.0.0.2", 00:35:02.410 "adrfam": "ipv4", 00:35:02.410 "trsvcid": "4420", 00:35:02.410 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:02.410 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:02.410 "hdgst": false, 00:35:02.410 "ddgst": false 00:35:02.410 }, 00:35:02.410 "method": "bdev_nvme_attach_controller" 00:35:02.410 }' 00:35:02.410 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:35:02.410 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:35:02.410 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:35:02.410 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:02.410 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:35:02.410 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:35:02.410 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:35:02.410 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:35:02.410 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:02.410 02:03:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:02.410 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:02.410 ... 00:35:02.410 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:02.410 ... 
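The trace above shows the moving parts of this fio run: a generated bdev JSON config fed in on /dev/fd/62, a generated job file on /dev/fd/61, and the SPDK fio plugin injected via LD_PRELOAD. A standalone sketch of the same invocation, using the parameters this test set earlier (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5) -- the plugin path and the bdev name Nvme0n1 are illustrative assumptions, not values taken from this log:

    # sketch of a job file equivalent to the one the harness generates
    cat > job.fio <<'EOF'
    [global]
    ioengine=spdk_bdev
    thread=1
    [filename0]
    filename=Nvme0n1
    rw=randread
    bs=8k,16k,128k
    iodepth=8
    runtime=5
    numjobs=2
    EOF
    # run fio with the SPDK bdev plugin preloaded, as the xtrace above does
    LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf ./bdev.json job.fio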
00:35:02.410 fio-3.35 00:35:02.410 Starting 4 threads 00:35:02.410 EAL: No free 2048 kB hugepages reported on node 1 00:35:07.709 00:35:07.709 filename0: (groupid=0, jobs=1): err= 0: pid=37796: Wed May 15 02:03:31 2024 00:35:07.709 read: IOPS=1860, BW=14.5MiB/s (15.2MB/s)(72.7MiB/5001msec) 00:35:07.709 slat (nsec): min=3825, max=67830, avg=22620.17, stdev=9936.84 00:35:07.709 clat (usec): min=875, max=9657, avg=4212.02, stdev=394.00 00:35:07.709 lat (usec): min=889, max=9670, avg=4234.64, stdev=394.13 00:35:07.709 clat percentiles (usec): 00:35:07.709 | 1.00th=[ 3195], 5.00th=[ 3916], 10.00th=[ 4015], 20.00th=[ 4080], 00:35:07.709 | 30.00th=[ 4146], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4228], 00:35:07.709 | 70.00th=[ 4293], 80.00th=[ 4293], 90.00th=[ 4359], 95.00th=[ 4490], 00:35:07.709 | 99.00th=[ 5866], 99.50th=[ 6718], 99.90th=[ 7046], 99.95th=[ 7373], 00:35:07.709 | 99.99th=[ 9634] 00:35:07.709 bw ( KiB/s): min=14784, max=14976, per=25.16%, avg=14878.22, stdev=69.02, samples=9 00:35:07.709 iops : min= 1848, max= 1872, avg=1859.78, stdev= 8.63, samples=9 00:35:07.709 lat (usec) : 1000=0.02% 00:35:07.709 lat (msec) : 2=0.44%, 4=7.98%, 10=91.55% 00:35:07.709 cpu : usr=95.88%, sys=3.60%, ctx=14, majf=0, minf=65 00:35:07.709 IO depths : 1=0.4%, 2=21.1%, 4=52.9%, 8=25.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:07.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.709 complete : 0=0.0%, 4=90.8%, 8=9.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.709 issued rwts: total=9305,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:07.709 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:07.709 filename0: (groupid=0, jobs=1): err= 0: pid=37797: Wed May 15 02:03:31 2024 00:35:07.709 read: IOPS=1863, BW=14.6MiB/s (15.3MB/s)(72.8MiB/5003msec) 00:35:07.709 slat (nsec): min=3987, max=98958, avg=20516.79, stdev=8651.69 00:35:07.709 clat (usec): min=777, max=8793, avg=4228.88, stdev=327.59 00:35:07.709 lat (usec): min=794, max=8823, avg=4249.40, stdev=327.42 00:35:07.709 clat percentiles (usec): 00:35:07.709 | 1.00th=[ 3392], 5.00th=[ 3884], 10.00th=[ 4015], 20.00th=[ 4113], 00:35:07.709 | 30.00th=[ 4178], 40.00th=[ 4228], 50.00th=[ 4228], 60.00th=[ 4293], 00:35:07.709 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4424], 95.00th=[ 4490], 00:35:07.709 | 99.00th=[ 5145], 99.50th=[ 6128], 99.90th=[ 7767], 99.95th=[ 8717], 00:35:07.709 | 99.99th=[ 8848] 00:35:07.709 bw ( KiB/s): min=14656, max=15200, per=25.19%, avg=14894.22, stdev=180.57, samples=9 00:35:07.709 iops : min= 1832, max= 1900, avg=1861.78, stdev=22.57, samples=9 00:35:07.709 lat (usec) : 1000=0.03% 00:35:07.709 lat (msec) : 2=0.12%, 4=8.64%, 10=91.21% 00:35:07.709 cpu : usr=94.60%, sys=4.86%, ctx=10, majf=0, minf=101 00:35:07.709 IO depths : 1=0.1%, 2=9.2%, 4=64.3%, 8=26.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:07.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.709 complete : 0=0.0%, 4=91.1%, 8=8.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.709 issued rwts: total=9322,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:07.709 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:07.709 filename1: (groupid=0, jobs=1): err= 0: pid=37798: Wed May 15 02:03:31 2024 00:35:07.709 read: IOPS=1863, BW=14.6MiB/s (15.3MB/s)(72.8MiB/5001msec) 00:35:07.709 slat (usec): min=3, max=115, avg=22.64, stdev=10.20 00:35:07.709 clat (usec): min=751, max=7748, avg=4206.51, stdev=429.92 00:35:07.709 lat (usec): min=765, max=7765, avg=4229.15, stdev=430.14 00:35:07.709 clat percentiles 
(usec): 00:35:07.709 | 1.00th=[ 2835], 5.00th=[ 3884], 10.00th=[ 4015], 20.00th=[ 4080], 00:35:07.709 | 30.00th=[ 4146], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4228], 00:35:07.709 | 70.00th=[ 4293], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4490], 00:35:07.709 | 99.00th=[ 5866], 99.50th=[ 6915], 99.90th=[ 7439], 99.95th=[ 7439], 00:35:07.709 | 99.99th=[ 7767] 00:35:07.709 bw ( KiB/s): min=14576, max=15392, per=25.19%, avg=14892.11, stdev=236.30, samples=9 00:35:07.709 iops : min= 1822, max= 1924, avg=1861.44, stdev=29.59, samples=9 00:35:07.709 lat (usec) : 1000=0.04% 00:35:07.709 lat (msec) : 2=0.54%, 4=9.19%, 10=90.24% 00:35:07.709 cpu : usr=94.50%, sys=4.44%, ctx=22, majf=0, minf=69 00:35:07.709 IO depths : 1=0.2%, 2=21.0%, 4=53.0%, 8=25.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:07.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.709 complete : 0=0.0%, 4=90.8%, 8=9.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.709 issued rwts: total=9319,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:07.709 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:07.709 filename1: (groupid=0, jobs=1): err= 0: pid=37799: Wed May 15 02:03:31 2024 00:35:07.709 read: IOPS=1847, BW=14.4MiB/s (15.1MB/s)(72.8MiB/5041msec) 00:35:07.709 slat (nsec): min=3868, max=68852, avg=21394.65, stdev=9577.60 00:35:07.709 clat (usec): min=834, max=44559, avg=4232.03, stdev=883.32 00:35:07.709 lat (usec): min=850, max=44574, avg=4253.43, stdev=883.21 00:35:07.709 clat percentiles (usec): 00:35:07.709 | 1.00th=[ 3032], 5.00th=[ 3916], 10.00th=[ 4015], 20.00th=[ 4113], 00:35:07.709 | 30.00th=[ 4146], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4228], 00:35:07.709 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4424], 95.00th=[ 4490], 00:35:07.709 | 99.00th=[ 5866], 99.50th=[ 6521], 99.90th=[10159], 99.95th=[13566], 00:35:07.709 | 99.99th=[44303] 00:35:07.709 bw ( KiB/s): min=14640, max=15104, per=25.19%, avg=14894.40, stdev=141.40, samples=10 00:35:07.709 iops : min= 1830, max= 1888, avg=1861.80, stdev=17.67, samples=10 00:35:07.709 lat (usec) : 1000=0.02% 00:35:07.709 lat (msec) : 2=0.34%, 4=8.62%, 10=90.88%, 20=0.09%, 50=0.04% 00:35:07.709 cpu : usr=94.74%, sys=4.64%, ctx=9, majf=0, minf=74 00:35:07.709 IO depths : 1=0.2%, 2=21.0%, 4=52.9%, 8=25.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:07.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.709 complete : 0=0.0%, 4=90.9%, 8=9.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.709 issued rwts: total=9313,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:07.709 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:07.709 00:35:07.709 Run status group 0 (all jobs): 00:35:07.709 READ: bw=57.7MiB/s (60.5MB/s), 14.4MiB/s-14.6MiB/s (15.1MB/s-15.3MB/s), io=291MiB (305MB), run=5001-5041msec 00:35:07.709 02:03:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:07.709 02:03:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:07.709 02:03:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:07.709 02:03:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:07.709 02:03:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:07.709 02:03:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:07.709 02:03:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:07.709 02:03:31 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:07.709 02:03:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:07.709 02:03:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:07.709 02:03:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:07.709 02:03:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:07.709 02:03:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:07.709 02:03:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:07.709 02:03:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:07.709 02:03:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:07.709 02:03:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:07.709 02:03:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:07.709 02:03:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:07.709 02:03:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:07.709 02:03:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:07.709 02:03:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:07.709 02:03:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:07.709 02:03:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:07.709 00:35:07.709 real 0m24.266s 00:35:07.709 user 4m36.029s 00:35:07.709 sys 0m5.956s 00:35:07.709 02:03:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # xtrace_disable 00:35:07.709 02:03:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:07.709 ************************************ 00:35:07.709 END TEST fio_dif_rand_params 00:35:07.709 ************************************ 00:35:07.709 02:03:31 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:07.709 02:03:31 nvmf_dif -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:35:07.709 02:03:31 nvmf_dif -- common/autotest_common.sh@1104 -- # xtrace_disable 00:35:07.709 02:03:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:07.709 ************************************ 00:35:07.709 START TEST fio_dif_digest 00:35:07.710 ************************************ 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # fio_dif_digest 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:35:07.710 02:03:31 
nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:07.710 bdev_null0 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:07.710 [2024-05-15 02:03:31.479964] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:07.710 { 00:35:07.710 "params": { 00:35:07.710 "name": "Nvme$subsystem", 00:35:07.710 "trtype": "$TEST_TRANSPORT", 00:35:07.710 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:07.710 "adrfam": "ipv4", 00:35:07.710 "trsvcid": "$NVMF_PORT", 00:35:07.710 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:07.710 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:35:07.710 "hdgst": ${hdgst:-false}, 00:35:07.710 "ddgst": ${ddgst:-false} 00:35:07.710 }, 00:35:07.710 "method": "bdev_nvme_attach_controller" 00:35:07.710 } 00:35:07.710 EOF 00:35:07.710 )") 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1353 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local sanitizers 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # shift 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local asan_lib= 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # grep libasan 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:07.710 "params": { 00:35:07.710 "name": "Nvme0", 00:35:07.710 "trtype": "tcp", 00:35:07.710 "traddr": "10.0.0.2", 00:35:07.710 "adrfam": "ipv4", 00:35:07.710 "trsvcid": "4420", 00:35:07.710 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:07.710 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:07.710 "hdgst": true, 00:35:07.710 "ddgst": true 00:35:07.710 }, 00:35:07.710 "method": "bdev_nvme_attach_controller" 00:35:07.710 }' 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # asan_lib= 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # asan_lib= 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:07.710 02:03:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:07.974 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:07.974 ... 
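The JSON printed just above is the only functional change from the preceding run: "hdgst": true and "ddgst": true enable NVMe/TCP header and data digests on the initiator-side bdev controller. The matching target setup can be replayed by hand; the commands below are condensed verbatim from the xtrace (rpc_cmd is a wrapper around scripts/rpc.py, whose path here is illustrative):

    # null bdev with 16-byte metadata and DIF type 3, exported over NVMe/TCP on 10.0.0.2:4420
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420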
00:35:07.974 fio-3.35 00:35:07.974 Starting 3 threads 00:35:07.974 EAL: No free 2048 kB hugepages reported on node 1 00:35:20.163 00:35:20.163 filename0: (groupid=0, jobs=1): err= 0: pid=38666: Wed May 15 02:03:42 2024 00:35:20.163 read: IOPS=195, BW=24.5MiB/s (25.7MB/s)(246MiB/10048msec) 00:35:20.163 slat (nsec): min=4938, max=54590, avg=22915.17, stdev=5497.74 00:35:20.163 clat (usec): min=11762, max=50347, avg=15262.01, stdev=1517.45 00:35:20.163 lat (usec): min=11783, max=50373, avg=15284.92, stdev=1517.36 00:35:20.163 clat percentiles (usec): 00:35:20.163 | 1.00th=[12780], 5.00th=[13566], 10.00th=[13960], 20.00th=[14353], 00:35:20.163 | 30.00th=[14746], 40.00th=[15008], 50.00th=[15270], 60.00th=[15533], 00:35:20.163 | 70.00th=[15664], 80.00th=[16057], 90.00th=[16450], 95.00th=[16909], 00:35:20.163 | 99.00th=[17957], 99.50th=[18220], 99.90th=[49021], 99.95th=[50594], 00:35:20.163 | 99.99th=[50594] 00:35:20.163 bw ( KiB/s): min=24320, max=25856, per=34.13%, avg=25164.80, stdev=343.46, samples=20 00:35:20.163 iops : min= 190, max= 202, avg=196.60, stdev= 2.68, samples=20 00:35:20.163 lat (msec) : 20=99.75%, 50=0.20%, 100=0.05% 00:35:20.163 cpu : usr=93.94%, sys=5.54%, ctx=24, majf=0, minf=87 00:35:20.163 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:20.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.163 issued rwts: total=1969,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:20.163 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:20.163 filename0: (groupid=0, jobs=1): err= 0: pid=38667: Wed May 15 02:03:42 2024 00:35:20.163 read: IOPS=190, BW=23.9MiB/s (25.0MB/s)(239MiB/10007msec) 00:35:20.163 slat (usec): min=5, max=135, avg=17.47, stdev= 5.89 00:35:20.163 clat (usec): min=7565, max=21156, avg=15696.15, stdev=1099.03 00:35:20.163 lat (usec): min=7586, max=21168, avg=15713.62, stdev=1098.87 00:35:20.163 clat percentiles (usec): 00:35:20.163 | 1.00th=[13304], 5.00th=[13960], 10.00th=[14353], 20.00th=[14877], 00:35:20.163 | 30.00th=[15139], 40.00th=[15401], 50.00th=[15664], 60.00th=[15926], 00:35:20.163 | 70.00th=[16188], 80.00th=[16581], 90.00th=[16909], 95.00th=[17433], 00:35:20.163 | 99.00th=[18482], 99.50th=[19006], 99.90th=[21103], 99.95th=[21103], 00:35:20.163 | 99.99th=[21103] 00:35:20.163 bw ( KiB/s): min=23808, max=25088, per=33.11%, avg=24411.95, stdev=379.34, samples=20 00:35:20.163 iops : min= 186, max= 196, avg=190.70, stdev= 2.99, samples=20 00:35:20.163 lat (msec) : 10=0.05%, 20=99.74%, 50=0.21% 00:35:20.163 cpu : usr=93.45%, sys=6.06%, ctx=31, majf=0, minf=203 00:35:20.163 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:20.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.163 issued rwts: total=1910,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:20.163 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:20.163 filename0: (groupid=0, jobs=1): err= 0: pid=38668: Wed May 15 02:03:42 2024 00:35:20.163 read: IOPS=190, BW=23.8MiB/s (24.9MB/s)(239MiB/10047msec) 00:35:20.163 slat (usec): min=4, max=111, avg=17.81, stdev= 5.62 00:35:20.163 clat (usec): min=11878, max=55349, avg=15744.01, stdev=1648.81 00:35:20.163 lat (usec): min=11892, max=55366, avg=15761.82, stdev=1648.68 00:35:20.163 clat percentiles (usec): 00:35:20.163 | 1.00th=[13304], 5.00th=[14091], 
10.00th=[14484], 20.00th=[14877], 00:35:20.163 | 30.00th=[15139], 40.00th=[15401], 50.00th=[15664], 60.00th=[15926], 00:35:20.163 | 70.00th=[16188], 80.00th=[16450], 90.00th=[16909], 95.00th=[17433], 00:35:20.163 | 99.00th=[18482], 99.50th=[19006], 99.90th=[53740], 99.95th=[55313], 00:35:20.163 | 99.99th=[55313] 00:35:20.163 bw ( KiB/s): min=23808, max=25088, per=33.10%, avg=24409.60, stdev=425.74, samples=20 00:35:20.163 iops : min= 186, max= 196, avg=190.70, stdev= 3.33, samples=20 00:35:20.163 lat (msec) : 20=99.74%, 50=0.16%, 100=0.10% 00:35:20.163 cpu : usr=93.33%, sys=6.18%, ctx=29, majf=0, minf=130 00:35:20.163 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:20.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.163 issued rwts: total=1909,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:20.163 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:20.163 00:35:20.163 Run status group 0 (all jobs): 00:35:20.163 READ: bw=72.0MiB/s (75.5MB/s), 23.8MiB/s-24.5MiB/s (24.9MB/s-25.7MB/s), io=724MiB (759MB), run=10007-10048msec 00:35:20.163 02:03:42 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:35:20.163 02:03:42 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:35:20.163 02:03:42 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:35:20.163 02:03:42 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:20.163 02:03:42 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:35:20.163 02:03:42 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:20.163 02:03:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:20.163 02:03:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:20.163 02:03:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:20.163 02:03:42 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:20.163 02:03:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:20.163 02:03:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:20.163 02:03:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:20.163 00:35:20.164 real 0m11.096s 00:35:20.164 user 0m29.147s 00:35:20.164 sys 0m2.045s 00:35:20.164 02:03:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # xtrace_disable 00:35:20.164 02:03:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:20.164 ************************************ 00:35:20.164 END TEST fio_dif_digest 00:35:20.164 ************************************ 00:35:20.164 02:03:42 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:20.164 02:03:42 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:35:20.164 02:03:42 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:20.164 02:03:42 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:35:20.164 02:03:42 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:20.164 02:03:42 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:35:20.164 02:03:42 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:20.164 02:03:42 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:20.164 rmmod nvme_tcp 00:35:20.164 rmmod nvme_fabrics 00:35:20.164 rmmod nvme_keyring 00:35:20.164 
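A quick consistency check on the fio_dif_digest numbers above: the test reads fixed 128KiB blocks, so bandwidth should equal IOPS times 128KiB. For filename0 (pid 38666), the reported avg of ~196.6 iops gives ~25165 KiB/s, which matches the avg bw fio printed; a rougher back-of-envelope version:

    # IOPS x block size in KiB, assuming the bs=128k set at the start of the test
    echo $(( 196 * 128 ))   # 25088 KiB/s, i.e. ~24.5 MiB/s as reported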
02:03:42 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:20.164 02:03:42 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:35:20.164 02:03:42 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:35:20.164 02:03:42 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 31996 ']' 00:35:20.164 02:03:42 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 31996 00:35:20.164 02:03:42 nvmf_dif -- common/autotest_common.sh@947 -- # '[' -z 31996 ']' 00:35:20.164 02:03:42 nvmf_dif -- common/autotest_common.sh@951 -- # kill -0 31996 00:35:20.164 02:03:42 nvmf_dif -- common/autotest_common.sh@952 -- # uname 00:35:20.164 02:03:42 nvmf_dif -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:35:20.164 02:03:42 nvmf_dif -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 31996 00:35:20.164 02:03:42 nvmf_dif -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:35:20.164 02:03:42 nvmf_dif -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:35:20.164 02:03:42 nvmf_dif -- common/autotest_common.sh@965 -- # echo 'killing process with pid 31996' 00:35:20.164 killing process with pid 31996 00:35:20.164 02:03:42 nvmf_dif -- common/autotest_common.sh@966 -- # kill 31996 00:35:20.164 [2024-05-15 02:03:42.667917] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:35:20.164 02:03:42 nvmf_dif -- common/autotest_common.sh@971 -- # wait 31996 00:35:20.164 02:03:42 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:35:20.164 02:03:42 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:20.164 Waiting for block devices as requested 00:35:20.164 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:20.421 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:20.421 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:20.421 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:20.421 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:20.679 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:20.679 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:20.679 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:20.679 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:35:20.937 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:20.937 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:20.937 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:21.195 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:21.195 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:21.195 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:21.195 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:21.453 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:21.453 02:03:45 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:21.453 02:03:45 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:21.453 02:03:45 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:21.453 02:03:45 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:21.453 02:03:45 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:21.453 02:03:45 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:21.453 02:03:45 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:23.354 02:03:47 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:23.354 00:35:23.354 real 1m7.363s 00:35:23.354 user 6m33.108s 
00:35:23.354 sys 0m17.575s 00:35:23.354 02:03:47 nvmf_dif -- common/autotest_common.sh@1123 -- # xtrace_disable 00:35:23.354 02:03:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:23.354 ************************************ 00:35:23.354 END TEST nvmf_dif 00:35:23.354 ************************************ 00:35:23.612 02:03:47 -- spdk/autotest.sh@289 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:23.612 02:03:47 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:35:23.612 02:03:47 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:35:23.612 02:03:47 -- common/autotest_common.sh@10 -- # set +x 00:35:23.612 ************************************ 00:35:23.612 START TEST nvmf_abort_qd_sizes 00:35:23.612 ************************************ 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:23.612 * Looking for test storage... 00:35:23.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:23.612 02:03:47 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:35:23.612 02:03:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:35:26.137 Found 0000:09:00.0 (0x8086 - 0x159b) 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:35:26.137 Found 0000:09:00.1 (0x8086 - 0x159b) 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:26.137 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:35:26.138 Found net devices under 0000:09:00.0: cvl_0_0 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:35:26.138 Found net devices under 0000:09:00.1: cvl_0_1 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
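Device discovery above found both ports of the e810 NIC (cvl_0_0 and cvl_0_1). The nvmf_tcp_init sequence traced next splits them into target and initiator roles by moving the target port into its own network namespace; condensed from the following xtrace (addresses and interface names are the ones used in this run):

    # target port goes into a namespace; the initiator port stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP traffic in from the initiator interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT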
00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:26.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:26.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:35:26.138 00:35:26.138 --- 10.0.0.2 ping statistics --- 00:35:26.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:26.138 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:26.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:26.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:35:26.138 00:35:26.138 --- 10.0.0.1 ping statistics --- 00:35:26.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:26.138 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:35:26.138 02:03:49 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:27.071 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:27.071 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:27.330 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:27.330 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:27.330 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:27.330 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:27.330 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:27.330 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:27.330 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:27.330 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:27.330 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:27.330 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:27.330 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:27.330 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:27.330 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:27.330 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:28.265 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:35:28.265 02:03:52 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:28.265 02:03:52 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:28.265 02:03:52 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:28.265 02:03:52 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:28.265 02:03:52 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:28.265 02:03:52 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:28.265 02:03:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:35:28.265 02:03:52 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:28.265 02:03:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@721 -- # xtrace_disable 00:35:28.265 02:03:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:28.265 02:03:52 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=43932 00:35:28.265 02:03:52 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:28.265 02:03:52 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 43932 00:35:28.265 02:03:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@828 -- # '[' -z 43932 ']' 00:35:28.265 02:03:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:28.265 02:03:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local max_retries=100 00:35:28.265 02:03:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
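Before the target app is started, nvmf_tcp_init (traced above) splits the NIC across a network namespace so a single machine can host both ends of the NVMe/TCP connection over real hardware: the target port moves into a private netns as 10.0.0.2, the initiator port stays in the root netns as 10.0.0.1. Collected from the xtrace into one runnable sketch (interface names and IPs as used in this run, root required):

#!/usr/bin/env bash
# Sketch of nvmf_tcp_init as traced above.
set -e
TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                 # target side lives in the netns
ip addr add 10.0.0.1/24 dev "$INI_IF"             # initiator IP (root netns)
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"   # target IP
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
# open the NVMe/TCP port toward the initiator, then verify both directions
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

This is also why nvmf_tgt below is launched as `ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt`: NVMF_APP gets prefixed with NVMF_TARGET_NS_CMD so the target binds inside the namespace.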
00:35:28.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:28.265 02:03:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # xtrace_disable 00:35:28.265 02:03:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:28.265 [2024-05-15 02:03:52.175830] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:35:28.265 [2024-05-15 02:03:52.175900] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:28.523 EAL: No free 2048 kB hugepages reported on node 1 00:35:28.523 [2024-05-15 02:03:52.259195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:28.523 [2024-05-15 02:03:52.347069] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:28.523 [2024-05-15 02:03:52.347120] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:28.523 [2024-05-15 02:03:52.347148] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:28.523 [2024-05-15 02:03:52.347159] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:28.523 [2024-05-15 02:03:52.347169] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:28.523 [2024-05-15 02:03:52.347259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:28.523 [2024-05-15 02:03:52.347325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:28.523 [2024-05-15 02:03:52.347298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:35:28.523 [2024-05-15 02:03:52.347322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:35:28.780 02:03:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:35:28.780 02:03:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@861 -- # return 0 00:35:28.780 02:03:52 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:28.780 02:03:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@727 -- # xtrace_disable 00:35:28.780 02:03:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:28.780 02:03:52 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:28.780 02:03:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:28.780 02:03:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:35:28.780 02:03:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:35:28.780 02:03:52 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:35:28.780 02:03:52 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:35:28.780 02:03:52 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:0b:00.0 ]] 00:35:28.780 02:03:52 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:35:28.780 02:03:52 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:35:28.780 02:03:52 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:0b:00.0 ]] 00:35:28.780 02:03:52 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:35:28.780 02:03:52 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:35:28.780 02:03:52 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:35:28.780 02:03:52 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:35:28.780 02:03:52 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:0b:00.0 00:35:28.780 02:03:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:35:28.780 02:03:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:0b:00.0 00:35:28.780 02:03:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:35:28.781 02:03:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:35:28.781 02:03:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1104 -- # xtrace_disable 00:35:28.781 02:03:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:28.781 ************************************ 00:35:28.781 START TEST spdk_target_abort 00:35:28.781 ************************************ 00:35:28.781 02:03:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # spdk_target 00:35:28.781 02:03:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:28.781 02:03:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:0b:00.0 -b spdk_target 00:35:28.781 02:03:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:28.781 02:03:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:32.059 spdk_targetn1 00:35:32.059 02:03:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:32.059 02:03:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:32.059 02:03:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:32.059 02:03:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:32.059 [2024-05-15 02:03:55.351329] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:32.059 02:03:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:32.059 02:03:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:35:32.059 02:03:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:32.059 02:03:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:32.059 02:03:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:32.059 02:03:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:35:32.059 02:03:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:32.059 02:03:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:32.059 02:03:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:32.059 02:03:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:35:32.059 02:03:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:32.059 02:03:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:32.059 [2024-05-15 02:03:55.383336] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:35:32.059 [2024-05-15 02:03:55.383661] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:32.059 02:03:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:32.059 02:03:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:35:32.059 02:03:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:32.059 02:03:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:32.059 02:03:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:32.059 02:03:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:32.059 02:03:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:32.059 02:03:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:32.059 02:03:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:32.059 02:03:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:32.059 02:03:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:32.059 02:03:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:32.059 02:03:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:32.059 02:03:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:32.059 02:03:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:32.059 02:03:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:32.059 02:03:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:32.059 02:03:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:32.059 02:03:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:32.059 02:03:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:32.059 02:03:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:32.059 02:03:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:32.059 EAL: No free 2048 kB hugepages reported on node 1 00:35:34.645 Initializing NVMe Controllers 00:35:34.645 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:34.645 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:34.645 Initialization complete. Launching workers. 00:35:34.645 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12578, failed: 0 00:35:34.645 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1281, failed to submit 11297 00:35:34.645 success 717, unsuccess 564, failed 0 00:35:34.645 02:03:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:34.645 02:03:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:34.902 EAL: No free 2048 kB hugepages reported on node 1 00:35:38.194 Initializing NVMe Controllers 00:35:38.194 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:38.194 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:38.194 Initialization complete. Launching workers. 00:35:38.194 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8765, failed: 0 00:35:38.194 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1267, failed to submit 7498 00:35:38.194 success 299, unsuccess 968, failed 0 00:35:38.194 02:04:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:38.194 02:04:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:38.195 EAL: No free 2048 kB hugepages reported on node 1 00:35:41.475 Initializing NVMe Controllers 00:35:41.475 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:41.475 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:41.475 Initialization complete. Launching workers. 
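Each of the spdk_target_abort reports in this stretch comes from the same SPDK abort example, re-invoked once per queue depth. A sketch of that loop, using only the flags visible in the trace (the binary path is this workspace's copy):

# Sketch of the queue-depth loop behind the three reports here
# (-q queue depth, -w rw -M 50 = 50/50 random read/write, -o 4096-byte I/O).
ABORT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort
TRID='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
for qd in 4 24 64; do
    # Deeper queues keep more commands in flight, so more aborts can be
    # submitted before the I/O completes -- compare the abort counts across
    # the qd=4, qd=24, and qd=64 reports in this section.
    "$ABORT" -q "$qd" -w rw -M 50 -o 4096 -r "$TRID"
done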
00:35:41.475 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31258, failed: 0 00:35:41.475 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2738, failed to submit 28520 00:35:41.475 success 502, unsuccess 2236, failed 0 00:35:41.475 02:04:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:35:41.475 02:04:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:41.475 02:04:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:41.475 02:04:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:41.475 02:04:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:41.475 02:04:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:41.475 02:04:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:42.406 02:04:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:42.406 02:04:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 43932 00:35:42.406 02:04:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@947 -- # '[' -z 43932 ']' 00:35:42.406 02:04:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # kill -0 43932 00:35:42.406 02:04:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # uname 00:35:42.406 02:04:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:35:42.406 02:04:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 43932 00:35:42.406 02:04:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:35:42.406 02:04:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:35:42.407 02:04:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # echo 'killing process with pid 43932' 00:35:42.407 killing process with pid 43932 00:35:42.407 02:04:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # kill 43932 00:35:42.407 [2024-05-15 02:04:06.301902] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:35:42.407 02:04:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # wait 43932 00:35:42.664 00:35:42.664 real 0m14.028s 00:35:42.664 user 0m53.189s 00:35:42.664 sys 0m2.511s 00:35:42.664 02:04:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # xtrace_disable 00:35:42.664 02:04:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:42.664 ************************************ 00:35:42.664 END TEST spdk_target_abort 00:35:42.664 ************************************ 00:35:42.664 02:04:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:35:42.664 02:04:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:35:42.664 02:04:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1104 -- # 
xtrace_disable 00:35:42.664 02:04:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:42.664 ************************************ 00:35:42.664 START TEST kernel_target_abort 00:35:42.664 ************************************ 00:35:42.664 02:04:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # kernel_target 00:35:42.664 02:04:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:35:42.664 02:04:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:35:42.664 02:04:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:42.664 02:04:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:42.664 02:04:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:42.665 02:04:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:42.665 02:04:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:42.665 02:04:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:42.665 02:04:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:42.665 02:04:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:42.665 02:04:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:42.665 02:04:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:42.665 02:04:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:42.665 02:04:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:35:42.665 02:04:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:42.665 02:04:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:42.665 02:04:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:42.665 02:04:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:35:42.665 02:04:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:35:42.665 02:04:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:35:42.921 02:04:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:42.921 02:04:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:43.853 Waiting for block devices as requested 00:35:44.110 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:44.110 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:44.110 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:44.110 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:44.368 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:44.368 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:44.368 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:44.368 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:44.626 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:35:44.626 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:44.626 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:44.626 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:44.884 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:44.884 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:44.884 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:44.884 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:45.146 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:45.146 02:04:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:35:45.146 02:04:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:45.146 02:04:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:35:45.146 02:04:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:35:45.146 02:04:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:45.146 02:04:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:35:45.146 02:04:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:35:45.146 02:04:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:35:45.146 02:04:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:45.146 No valid GPT data, bailing 00:35:45.146 02:04:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:45.146 02:04:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:35:45.146 02:04:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:35:45.146 02:04:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:35:45.146 02:04:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:35:45.146 02:04:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:45.146 02:04:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:45.146 02:04:09 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:45.146 02:04:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:45.146 02:04:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:35:45.146 02:04:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:35:45.146 02:04:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:35:45.146 02:04:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:35:45.146 02:04:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:35:45.146 02:04:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:35:45.147 02:04:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:35:45.147 02:04:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:45.147 02:04:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:35:45.147 00:35:45.147 Discovery Log Number of Records 2, Generation counter 2 00:35:45.147 =====Discovery Log Entry 0====== 00:35:45.147 trtype: tcp 00:35:45.147 adrfam: ipv4 00:35:45.147 subtype: current discovery subsystem 00:35:45.147 treq: not specified, sq flow control disable supported 00:35:45.147 portid: 1 00:35:45.147 trsvcid: 4420 00:35:45.147 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:45.147 traddr: 10.0.0.1 00:35:45.147 eflags: none 00:35:45.147 sectype: none 00:35:45.147 =====Discovery Log Entry 1====== 00:35:45.147 trtype: tcp 00:35:45.147 adrfam: ipv4 00:35:45.147 subtype: nvme subsystem 00:35:45.147 treq: not specified, sq flow control disable supported 00:35:45.147 portid: 1 00:35:45.147 trsvcid: 4420 00:35:45.147 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:45.147 traddr: 10.0.0.1 00:35:45.147 eflags: none 00:35:45.147 sectype: none 00:35:45.147 02:04:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:35:45.147 02:04:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:45.147 02:04:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:45.147 02:04:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:35:45.147 02:04:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:45.147 02:04:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:45.147 02:04:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:45.147 02:04:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:45.147 02:04:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:45.147 02:04:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:45.147 02:04:09 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:45.147 02:04:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:45.147 02:04:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:45.147 02:04:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:45.147 02:04:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:35:45.147 02:04:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:45.147 02:04:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:35:45.147 02:04:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:45.147 02:04:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:45.147 02:04:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:45.147 02:04:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:45.147 EAL: No free 2048 kB hugepages reported on node 1 00:35:48.425 Initializing NVMe Controllers 00:35:48.425 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:48.425 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:48.425 Initialization complete. Launching workers. 00:35:48.425 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 44303, failed: 0 00:35:48.425 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 44303, failed to submit 0 00:35:48.425 success 0, unsuccess 44303, failed 0 00:35:48.425 02:04:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:48.425 02:04:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:48.425 EAL: No free 2048 kB hugepages reported on node 1 00:35:51.705 Initializing NVMe Controllers 00:35:51.705 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:51.705 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:51.705 Initialization complete. Launching workers. 
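The kernel_target_abort variant being exercised here was wired up purely through nvmet configfs (the mkdir/echo/ln calls traced above). The xtrace hides each echo's redirect target, so in the sketch below the attribute paths are the standard nvmet ones and should be read as assumptions rather than a transcription:

#!/usr/bin/env bash
# Sketch of configure_kernel_target as traced above: expose /dev/nvme0n1 as
# nqn.2016-06.io.spdk:testnqn on 10.0.0.1:4420 via the kernel nvmet driver.
set -e
modprobe nvmet                                     # as in the trace
SUB=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
PORT=/sys/kernel/config/nvmet/ports/1
mkdir "$SUB" "$SUB/namespaces/1" "$PORT"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$SUB/attr_model"  # model string echoed above
echo 1            > "$SUB/attr_allow_any_host"     # attribute names assumed
echo /dev/nvme0n1 > "$SUB/namespaces/1/device_path"
echo 1            > "$SUB/namespaces/1/enable"
echo 10.0.0.1     > "$PORT/addr_traddr"
echo tcp          > "$PORT/addr_trtype"
echo 4420         > "$PORT/addr_trsvcid"
echo ipv4         > "$PORT/addr_adrfam"
ln -s "$SUB" "$PORT/subsystems/"                   # publish the subsystem on the port

The trace then confirms the wiring with `nvme discover --hostnqn=... -a 10.0.0.1 -t tcp -s 4420`, whose two discovery-log entries (the discovery subsystem plus testnqn) appear above.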
00:35:51.705 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 78870, failed: 0 00:35:51.705 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19870, failed to submit 59000 00:35:51.705 success 0, unsuccess 19870, failed 0 00:35:51.705 02:04:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:51.705 02:04:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:51.705 EAL: No free 2048 kB hugepages reported on node 1 00:35:54.985 Initializing NVMe Controllers 00:35:54.985 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:54.985 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:54.985 Initialization complete. Launching workers. 00:35:54.985 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 84872, failed: 0 00:35:54.985 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21214, failed to submit 63658 00:35:54.985 success 0, unsuccess 21214, failed 0 00:35:54.985 02:04:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:35:54.985 02:04:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:54.985 02:04:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:35:54.985 02:04:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:54.985 02:04:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:54.985 02:04:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:54.985 02:04:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:54.985 02:04:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:35:54.985 02:04:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:35:54.985 02:04:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:55.962 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:55.962 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:55.962 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:55.962 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:55.962 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:55.962 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:55.962 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:55.962 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:55.962 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:55.962 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:55.963 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:55.963 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:55.963 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:55.963 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:35:55.963 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:55.963 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:56.897 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:35:56.897 00:35:56.897 real 0m14.178s 00:35:56.897 user 0m6.223s 00:35:56.897 sys 0m3.280s 00:35:56.897 02:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # xtrace_disable 00:35:56.897 02:04:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:56.897 ************************************ 00:35:56.897 END TEST kernel_target_abort 00:35:56.897 ************************************ 00:35:56.897 02:04:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:35:56.897 02:04:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:35:56.897 02:04:20 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:56.897 02:04:20 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:35:56.897 02:04:20 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:56.897 02:04:20 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:35:56.897 02:04:20 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:56.897 02:04:20 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:56.897 rmmod nvme_tcp 00:35:56.897 rmmod nvme_fabrics 00:35:56.897 rmmod nvme_keyring 00:35:57.154 02:04:20 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:57.154 02:04:20 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:35:57.154 02:04:20 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:35:57.154 02:04:20 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 43932 ']' 00:35:57.154 02:04:20 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 43932 00:35:57.154 02:04:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@947 -- # '[' -z 43932 ']' 00:35:57.154 02:04:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@951 -- # kill -0 43932 00:35:57.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (43932) - No such process 00:35:57.154 02:04:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@974 -- # echo 'Process with pid 43932 is not found' 00:35:57.154 Process with pid 43932 is not found 00:35:57.155 02:04:20 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:35:57.155 02:04:20 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:58.529 Waiting for block devices as requested 00:35:58.529 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:58.529 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:58.529 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:58.529 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:58.529 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:58.529 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:58.786 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:58.786 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:58.786 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:35:58.786 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:59.044 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:59.044 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:59.044 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:59.301 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:59.301 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:59.301 0000:80:04.1 (8086 0e21): 
vfio-pci -> ioatdma 00:35:59.301 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:59.559 02:04:23 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:59.559 02:04:23 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:59.559 02:04:23 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:59.559 02:04:23 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:59.559 02:04:23 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:59.559 02:04:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:59.559 02:04:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:01.461 02:04:25 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:01.461 00:36:01.461 real 0m38.010s 00:36:01.461 user 1m1.737s 00:36:01.461 sys 0m9.391s 00:36:01.461 02:04:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # xtrace_disable 00:36:01.461 02:04:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:01.461 ************************************ 00:36:01.461 END TEST nvmf_abort_qd_sizes 00:36:01.461 ************************************ 00:36:01.461 02:04:25 -- spdk/autotest.sh@291 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:01.461 02:04:25 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:36:01.461 02:04:25 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:36:01.461 02:04:25 -- common/autotest_common.sh@10 -- # set +x 00:36:01.461 ************************************ 00:36:01.461 START TEST keyring_file 00:36:01.461 ************************************ 00:36:01.461 02:04:25 keyring_file -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:01.720 * Looking for test storage... 
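Teardown in the stretch above mirrors setup: the host-side NVMe modules are unloaded (the rmmod lines), the already-dead target pid 43932 is reaped, the namespace plumbing is removed, and setup.sh rebinds the drivers. A condensed sketch of the tcp-side nvmftestfini path; the netns removal itself is hidden behind xtrace_disable in the trace, so that line is an inference, not a transcription:

# Sketch of the tcp-side nvmftestfini steps around this point in the log.
modprobe -v -r nvme-tcp           # prints the "rmmod nvme_tcp" lines seen above
modprobe -v -r nvme-fabrics
kill -0 "$nvmfpid" 2>/dev/null && kill "$nvmfpid"    # here: "No such process"
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true  # assumed _remove_spdk_ns body
ip -4 addr flush cvl_0_1                             # final flush in the trace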
00:36:01.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:01.720 02:04:25 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:01.720 02:04:25 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:01.720 02:04:25 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:36:01.720 02:04:25 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:01.720 02:04:25 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:01.720 02:04:25 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:01.720 02:04:25 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:01.720 02:04:25 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:01.720 02:04:25 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:01.720 02:04:25 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:01.720 02:04:25 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:01.720 02:04:25 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:01.720 02:04:25 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:01.720 02:04:25 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:36:01.720 02:04:25 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:36:01.720 02:04:25 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:01.720 02:04:25 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:01.720 02:04:25 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:01.720 02:04:25 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:01.720 02:04:25 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:01.720 02:04:25 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:01.720 02:04:25 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:01.720 02:04:25 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:01.720 02:04:25 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.720 02:04:25 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.720 02:04:25 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.720 02:04:25 keyring_file -- paths/export.sh@5 -- # export PATH 00:36:01.720 02:04:25 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.720 02:04:25 keyring_file -- nvmf/common.sh@47 -- # : 0 00:36:01.720 02:04:25 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:01.720 02:04:25 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:01.720 02:04:25 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:01.720 02:04:25 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:01.720 02:04:25 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:01.721 02:04:25 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:01.721 02:04:25 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:01.721 02:04:25 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:01.721 02:04:25 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:01.721 02:04:25 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:01.721 02:04:25 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:01.721 02:04:25 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:01.721 02:04:25 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:01.721 02:04:25 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:01.721 02:04:25 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:01.721 02:04:25 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:01.721 02:04:25 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:01.721 02:04:25 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:01.721 02:04:25 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:01.721 02:04:25 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:01.721 02:04:25 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.rcWYOZTwqC 00:36:01.721 02:04:25 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:01.721 02:04:25 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:01.721 02:04:25 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:01.721 02:04:25 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:01.721 02:04:25 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:01.721 02:04:25 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:01.721 02:04:25 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:01.721 02:04:25 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.rcWYOZTwqC 00:36:01.721 02:04:25 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.rcWYOZTwqC 00:36:01.721 02:04:25 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.rcWYOZTwqC 00:36:01.721 02:04:25 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:01.721 02:04:25 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:01.721 02:04:25 keyring_file -- keyring/common.sh@17 -- # name=key1 00:36:01.721 02:04:25 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:01.721 02:04:25 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:01.721 02:04:25 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:01.721 02:04:25 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.7CMbPrcOnU 00:36:01.721 02:04:25 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:01.721 02:04:25 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:01.721 02:04:25 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:01.721 02:04:25 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:01.721 02:04:25 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:36:01.721 02:04:25 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:01.721 02:04:25 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:01.721 02:04:25 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.7CMbPrcOnU 00:36:01.721 02:04:25 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.7CMbPrcOnU 00:36:01.721 02:04:25 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.7CMbPrcOnU 00:36:01.721 02:04:25 keyring_file -- keyring/file.sh@30 -- # tgtpid=50087 00:36:01.721 02:04:25 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:01.721 02:04:25 keyring_file -- keyring/file.sh@32 -- # waitforlisten 50087 00:36:01.721 02:04:25 keyring_file -- common/autotest_common.sh@828 -- # '[' -z 50087 ']' 00:36:01.721 02:04:25 keyring_file -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:01.721 02:04:25 keyring_file -- common/autotest_common.sh@833 -- # local max_retries=100 00:36:01.721 02:04:25 keyring_file -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:01.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:01.721 02:04:25 keyring_file -- common/autotest_common.sh@837 -- # xtrace_disable 00:36:01.721 02:04:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:01.721 [2024-05-15 02:04:25.576027] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
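prep_key, traced above, pipes each hex key through `python -` to produce the NVMe TLS PSK interchange string before storing it mode 0600. The python body is not shown in the xtrace, so the sketch below reconstructs it from the interchange format (prefix, hash selector, base64 of the key bytes plus a CRC-32); the exact field encoding and CRC byte order are assumptions:

#!/usr/bin/env bash
# Sketch of prep_key/format_interchange_psk as traced above: hex key in,
# "NVMeTLSkey-1:<digest>:<base64(key || crc32)>:" out, stored mode 0600.
key=00112233445566778899aabbccddeeff
digest=0                              # 0 = no retained-key hash in this run
path=$(mktemp)                        # e.g. /tmp/tmp.rcWYOZTwqC above
python3 - "$key" "$digest" > "$path" <<'EOF'
import base64, struct, sys, zlib
raw = bytes.fromhex(sys.argv[1])
crc = struct.pack("<I", zlib.crc32(raw))  # CRC-32 byte order assumed
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02x}:"
      f"{base64.b64encode(raw + crc).decode()}:")
EOF
chmod 0600 "$path"                    # loose permissions are rejected later
echo "$path"

The chmod matters: the keyring_file_add_key RPCs issued below will presumably refuse a key file that other users can read, which is why it sits between mktemp and the bperf calls.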
00:36:01.721 [2024-05-15 02:04:25.576104] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid50087 ] 00:36:01.721 EAL: No free 2048 kB hugepages reported on node 1 00:36:01.979 [2024-05-15 02:04:25.661944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:01.979 [2024-05-15 02:04:25.747367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:02.237 02:04:25 keyring_file -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:36:02.237 02:04:25 keyring_file -- common/autotest_common.sh@861 -- # return 0 00:36:02.237 02:04:25 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:36:02.237 02:04:25 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:02.237 02:04:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:02.237 [2024-05-15 02:04:25.986883] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:02.237 null0 00:36:02.237 [2024-05-15 02:04:26.018907] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:36:02.237 [2024-05-15 02:04:26.018973] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:02.237 [2024-05-15 02:04:26.019506] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:02.237 [2024-05-15 02:04:26.026951] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:36:02.237 02:04:26 keyring_file -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:02.237 02:04:26 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:02.237 02:04:26 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:36:02.237 02:04:26 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:02.237 02:04:26 keyring_file -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:36:02.237 02:04:26 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:02.237 02:04:26 keyring_file -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:36:02.237 02:04:26 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:02.237 02:04:26 keyring_file -- common/autotest_common.sh@652 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:02.237 02:04:26 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:02.237 02:04:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:02.237 [2024-05-15 02:04:26.038998] nvmf_rpc.c: 773:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:36:02.237 request: 00:36:02.237 { 00:36:02.237 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:02.237 "secure_channel": false, 00:36:02.237 "listen_address": { 00:36:02.237 "trtype": "tcp", 00:36:02.237 "traddr": "127.0.0.1", 00:36:02.237 "trsvcid": "4420" 00:36:02.237 }, 00:36:02.237 "method": "nvmf_subsystem_add_listener", 00:36:02.237 "req_id": 1 00:36:02.237 } 00:36:02.237 Got JSON-RPC error response 00:36:02.237 response: 00:36:02.237 { 00:36:02.237 "code": -32602, 00:36:02.237 "message": 
"Invalid parameters" 00:36:02.237 } 00:36:02.237 02:04:26 keyring_file -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:36:02.237 02:04:26 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:36:02.237 02:04:26 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:36:02.237 02:04:26 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:36:02.237 02:04:26 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:36:02.237 02:04:26 keyring_file -- keyring/file.sh@46 -- # bperfpid=50099 00:36:02.237 02:04:26 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:02.237 02:04:26 keyring_file -- keyring/file.sh@48 -- # waitforlisten 50099 /var/tmp/bperf.sock 00:36:02.237 02:04:26 keyring_file -- common/autotest_common.sh@828 -- # '[' -z 50099 ']' 00:36:02.237 02:04:26 keyring_file -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:02.237 02:04:26 keyring_file -- common/autotest_common.sh@833 -- # local max_retries=100 00:36:02.237 02:04:26 keyring_file -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:02.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:02.237 02:04:26 keyring_file -- common/autotest_common.sh@837 -- # xtrace_disable 00:36:02.237 02:04:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:02.237 [2024-05-15 02:04:26.084698] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 00:36:02.237 [2024-05-15 02:04:26.084762] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid50099 ] 00:36:02.237 EAL: No free 2048 kB hugepages reported on node 1 00:36:02.237 [2024-05-15 02:04:26.153899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:02.495 [2024-05-15 02:04:26.240632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:02.495 02:04:26 keyring_file -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:36:02.495 02:04:26 keyring_file -- common/autotest_common.sh@861 -- # return 0 00:36:02.495 02:04:26 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.rcWYOZTwqC 00:36:02.495 02:04:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rcWYOZTwqC 00:36:02.752 02:04:26 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.7CMbPrcOnU 00:36:02.752 02:04:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.7CMbPrcOnU 00:36:03.010 02:04:26 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:36:03.010 02:04:26 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:36:03.010 02:04:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:03.010 02:04:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:03.010 02:04:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:03.267 
02:04:27 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.rcWYOZTwqC == \/\t\m\p\/\t\m\p\.\r\c\W\Y\O\Z\T\w\q\C ]] 00:36:03.267 02:04:27 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:36:03.267 02:04:27 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:36:03.267 02:04:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:03.267 02:04:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:03.267 02:04:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:03.524 02:04:27 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.7CMbPrcOnU == \/\t\m\p\/\t\m\p\.\7\C\M\b\P\r\c\O\n\U ]] 00:36:03.524 02:04:27 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:36:03.524 02:04:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:03.524 02:04:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:03.524 02:04:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:03.524 02:04:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:03.525 02:04:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:03.782 02:04:27 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:36:03.782 02:04:27 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:36:03.782 02:04:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:03.782 02:04:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:03.782 02:04:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:03.782 02:04:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:03.782 02:04:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:04.040 02:04:27 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:36:04.040 02:04:27 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:04.040 02:04:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:04.297 [2024-05-15 02:04:28.078175] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:04.297 nvme0n1 00:36:04.297 02:04:28 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:36:04.297 02:04:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:04.297 02:04:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:04.297 02:04:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:04.297 02:04:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:04.297 02:04:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:04.554 02:04:28 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:36:04.554 02:04:28 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:36:04.554 02:04:28 keyring_file -- 
keyring/common.sh@12 -- # get_key key1 00:36:04.554 02:04:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:04.554 02:04:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:04.554 02:04:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:04.554 02:04:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:04.811 02:04:28 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:36:04.811 02:04:28 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:05.067 Running I/O for 1 seconds... 00:36:06.001 00:36:06.002 Latency(us) 00:36:06.002 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:06.002 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:36:06.002 nvme0n1 : 1.01 7488.64 29.25 0.00 0.00 16998.53 8543.95 28544.57 00:36:06.002 =================================================================================================================== 00:36:06.002 Total : 7488.64 29.25 0.00 0.00 16998.53 8543.95 28544.57 00:36:06.002 0 00:36:06.002 02:04:29 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:06.002 02:04:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:06.259 02:04:30 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:36:06.259 02:04:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:06.259 02:04:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:06.259 02:04:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:06.259 02:04:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:06.259 02:04:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:06.516 02:04:30 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:36:06.516 02:04:30 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:36:06.516 02:04:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:06.516 02:04:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:06.516 02:04:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:06.516 02:04:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:06.516 02:04:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:06.774 02:04:30 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:06.774 02:04:30 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:06.774 02:04:30 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:36:06.774 02:04:30 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:06.774 02:04:30 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 
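Note: the xtrace lines around this point are the internals of the NOT helper from test/common/autotest_common.sh, which runs a command that is expected to fail and inverts its exit status. A simplified sketch of the idea only, not the exact implementation (the real helper also validates the argument with type -t and screens out exit codes above 128 as signal deaths):

    # Simplified sketch of NOT: succeed only if the wrapped command fails
    NOT() {
        if "$@"; then
            return 1   # the wrapped command unexpectedly succeeded
        fi
        return 0       # it failed, which is what the test wanted
    }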
00:36:06.774 02:04:30 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:06.774 02:04:30 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:36:06.774 02:04:30 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:06.774 02:04:30 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:06.774 02:04:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:07.031 [2024-05-15 02:04:30.791015] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:07.031 [2024-05-15 02:04:30.791580] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22bb1d0 (107): Transport endpoint is not connected 00:36:07.031 [2024-05-15 02:04:30.792574] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22bb1d0 (9): Bad file descriptor 00:36:07.031 [2024-05-15 02:04:30.793572] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:07.031 [2024-05-15 02:04:30.793594] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:07.031 [2024-05-15 02:04:30.793607] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
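Note: the JSON-RPC exchange recorded below is the negative path being exercised here: the attach uses --psk key1 while the target subsystem was configured with the PSK behind key0, so the connection is torn down during setup and bdev_nvme_attach_controller is expected to return -32602. The "Transport endpoint is not connected" and "Bad file descriptor" errors above are the qpair teardown that precedes that response. The check reduces to (flags as in keyring/file.sh@69):

    # Negative check: attaching with the mismatched key must fail
    NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1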
00:36:07.031 request: 00:36:07.031 { 00:36:07.031 "name": "nvme0", 00:36:07.031 "trtype": "tcp", 00:36:07.031 "traddr": "127.0.0.1", 00:36:07.031 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:07.031 "adrfam": "ipv4", 00:36:07.031 "trsvcid": "4420", 00:36:07.031 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:07.031 "psk": "key1", 00:36:07.031 "method": "bdev_nvme_attach_controller", 00:36:07.031 "req_id": 1 00:36:07.031 } 00:36:07.031 Got JSON-RPC error response 00:36:07.031 response: 00:36:07.031 { 00:36:07.031 "code": -32602, 00:36:07.031 "message": "Invalid parameters" 00:36:07.031 } 00:36:07.031 02:04:30 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:36:07.031 02:04:30 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:36:07.031 02:04:30 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:36:07.031 02:04:30 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:36:07.031 02:04:30 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:36:07.031 02:04:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:07.031 02:04:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:07.031 02:04:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:07.031 02:04:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:07.031 02:04:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:07.288 02:04:31 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:36:07.288 02:04:31 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:36:07.288 02:04:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:07.288 02:04:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:07.288 02:04:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:07.288 02:04:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:07.288 02:04:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:07.545 02:04:31 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:07.545 02:04:31 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:36:07.545 02:04:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:07.802 02:04:31 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:36:07.802 02:04:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:08.060 02:04:31 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:36:08.060 02:04:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:08.060 02:04:31 keyring_file -- keyring/file.sh@77 -- # jq length 00:36:08.318 02:04:32 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:36:08.318 02:04:32 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.rcWYOZTwqC 00:36:08.318 02:04:32 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.rcWYOZTwqC 00:36:08.318 02:04:32 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:36:08.318 02:04:32 
keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.rcWYOZTwqC 00:36:08.318 02:04:32 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:36:08.318 02:04:32 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:08.318 02:04:32 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:36:08.318 02:04:32 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:08.318 02:04:32 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.rcWYOZTwqC 00:36:08.318 02:04:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rcWYOZTwqC 00:36:08.576 [2024-05-15 02:04:32.308616] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.rcWYOZTwqC': 0100660 00:36:08.576 [2024-05-15 02:04:32.308656] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:08.576 request: 00:36:08.576 { 00:36:08.576 "name": "key0", 00:36:08.576 "path": "/tmp/tmp.rcWYOZTwqC", 00:36:08.576 "method": "keyring_file_add_key", 00:36:08.576 "req_id": 1 00:36:08.576 } 00:36:08.576 Got JSON-RPC error response 00:36:08.576 response: 00:36:08.576 { 00:36:08.576 "code": -1, 00:36:08.576 "message": "Operation not permitted" 00:36:08.576 } 00:36:08.576 02:04:32 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:36:08.576 02:04:32 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:36:08.576 02:04:32 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:36:08.576 02:04:32 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:36:08.576 02:04:32 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.rcWYOZTwqC 00:36:08.576 02:04:32 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.rcWYOZTwqC 00:36:08.576 02:04:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rcWYOZTwqC 00:36:08.833 02:04:32 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.rcWYOZTwqC 00:36:08.833 02:04:32 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:36:08.833 02:04:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:08.833 02:04:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:08.833 02:04:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:08.833 02:04:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:08.833 02:04:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:09.091 02:04:32 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:36:09.091 02:04:32 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:09.091 02:04:32 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:36:09.091 02:04:32 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:09.091 02:04:32 
keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:36:09.091 02:04:32 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:09.091 02:04:32 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:36:09.091 02:04:32 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:09.091 02:04:32 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:09.091 02:04:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:09.348 [2024-05-15 02:04:33.038613] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.rcWYOZTwqC': No such file or directory 00:36:09.348 [2024-05-15 02:04:33.038652] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:09.348 [2024-05-15 02:04:33.038693] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:09.348 [2024-05-15 02:04:33.038707] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:09.348 [2024-05-15 02:04:33.038721] bdev_nvme.c:6252:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:09.348 request: 00:36:09.348 { 00:36:09.348 "name": "nvme0", 00:36:09.348 "trtype": "tcp", 00:36:09.348 "traddr": "127.0.0.1", 00:36:09.348 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:09.348 "adrfam": "ipv4", 00:36:09.348 "trsvcid": "4420", 00:36:09.348 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:09.348 "psk": "key0", 00:36:09.348 "method": "bdev_nvme_attach_controller", 00:36:09.348 "req_id": 1 00:36:09.348 } 00:36:09.348 Got JSON-RPC error response 00:36:09.348 response: 00:36:09.348 { 00:36:09.348 "code": -19, 00:36:09.348 "message": "No such device" 00:36:09.348 } 00:36:09.348 02:04:33 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:36:09.348 02:04:33 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:36:09.348 02:04:33 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:36:09.348 02:04:33 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:36:09.348 02:04:33 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:36:09.348 02:04:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:09.605 02:04:33 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:09.605 02:04:33 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:09.605 02:04:33 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:09.605 02:04:33 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:09.605 02:04:33 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:09.605 02:04:33 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:09.605 02:04:33 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.VfNmgmP9j8 00:36:09.605 02:04:33 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:09.605 02:04:33 
keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:09.605 02:04:33 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:09.605 02:04:33 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:09.605 02:04:33 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:09.605 02:04:33 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:09.605 02:04:33 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:09.605 02:04:33 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.VfNmgmP9j8 00:36:09.605 02:04:33 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.VfNmgmP9j8 00:36:09.605 02:04:33 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.VfNmgmP9j8 00:36:09.605 02:04:33 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VfNmgmP9j8 00:36:09.605 02:04:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VfNmgmP9j8 00:36:09.862 02:04:33 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:09.862 02:04:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:10.119 nvme0n1 00:36:10.119 02:04:33 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:36:10.120 02:04:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:10.120 02:04:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:10.120 02:04:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:10.120 02:04:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:10.120 02:04:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:10.376 02:04:34 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:36:10.376 02:04:34 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:36:10.377 02:04:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:10.635 02:04:34 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:36:10.635 02:04:34 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:36:10.635 02:04:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:10.635 02:04:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:10.635 02:04:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:10.893 02:04:34 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:36:10.893 02:04:34 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:36:10.893 02:04:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:10.893 02:04:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:10.893 02:04:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:10.893 02:04:34 keyring_file -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:10.893 02:04:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:11.151 02:04:34 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:36:11.151 02:04:34 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:11.151 02:04:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:11.408 02:04:35 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:36:11.408 02:04:35 keyring_file -- keyring/file.sh@104 -- # jq length 00:36:11.408 02:04:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:11.671 02:04:35 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:36:11.671 02:04:35 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VfNmgmP9j8 00:36:11.671 02:04:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VfNmgmP9j8 00:36:11.980 02:04:35 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.7CMbPrcOnU 00:36:11.980 02:04:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.7CMbPrcOnU 00:36:12.237 02:04:35 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:12.237 02:04:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:12.494 nvme0n1 00:36:12.494 02:04:36 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:36:12.494 02:04:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:36:12.753 02:04:36 keyring_file -- keyring/file.sh@112 -- # config='{ 00:36:12.753 "subsystems": [ 00:36:12.753 { 00:36:12.753 "subsystem": "keyring", 00:36:12.753 "config": [ 00:36:12.753 { 00:36:12.753 "method": "keyring_file_add_key", 00:36:12.753 "params": { 00:36:12.753 "name": "key0", 00:36:12.753 "path": "/tmp/tmp.VfNmgmP9j8" 00:36:12.753 } 00:36:12.753 }, 00:36:12.753 { 00:36:12.753 "method": "keyring_file_add_key", 00:36:12.753 "params": { 00:36:12.753 "name": "key1", 00:36:12.753 "path": "/tmp/tmp.7CMbPrcOnU" 00:36:12.753 } 00:36:12.753 } 00:36:12.753 ] 00:36:12.753 }, 00:36:12.753 { 00:36:12.753 "subsystem": "iobuf", 00:36:12.753 "config": [ 00:36:12.753 { 00:36:12.753 "method": "iobuf_set_options", 00:36:12.753 "params": { 00:36:12.753 "small_pool_count": 8192, 00:36:12.753 "large_pool_count": 1024, 00:36:12.753 "small_bufsize": 8192, 00:36:12.753 "large_bufsize": 135168 00:36:12.753 } 00:36:12.753 } 00:36:12.753 ] 00:36:12.753 }, 00:36:12.753 { 00:36:12.753 "subsystem": "sock", 00:36:12.753 "config": [ 00:36:12.753 { 00:36:12.753 "method": "sock_impl_set_options", 00:36:12.753 "params": { 00:36:12.753 
"impl_name": "posix", 00:36:12.753 "recv_buf_size": 2097152, 00:36:12.753 "send_buf_size": 2097152, 00:36:12.753 "enable_recv_pipe": true, 00:36:12.753 "enable_quickack": false, 00:36:12.753 "enable_placement_id": 0, 00:36:12.753 "enable_zerocopy_send_server": true, 00:36:12.753 "enable_zerocopy_send_client": false, 00:36:12.753 "zerocopy_threshold": 0, 00:36:12.753 "tls_version": 0, 00:36:12.753 "enable_ktls": false 00:36:12.753 } 00:36:12.753 }, 00:36:12.753 { 00:36:12.753 "method": "sock_impl_set_options", 00:36:12.753 "params": { 00:36:12.753 "impl_name": "ssl", 00:36:12.753 "recv_buf_size": 4096, 00:36:12.753 "send_buf_size": 4096, 00:36:12.753 "enable_recv_pipe": true, 00:36:12.753 "enable_quickack": false, 00:36:12.753 "enable_placement_id": 0, 00:36:12.753 "enable_zerocopy_send_server": true, 00:36:12.753 "enable_zerocopy_send_client": false, 00:36:12.753 "zerocopy_threshold": 0, 00:36:12.753 "tls_version": 0, 00:36:12.753 "enable_ktls": false 00:36:12.753 } 00:36:12.753 } 00:36:12.753 ] 00:36:12.753 }, 00:36:12.753 { 00:36:12.753 "subsystem": "vmd", 00:36:12.753 "config": [] 00:36:12.753 }, 00:36:12.753 { 00:36:12.753 "subsystem": "accel", 00:36:12.753 "config": [ 00:36:12.753 { 00:36:12.753 "method": "accel_set_options", 00:36:12.753 "params": { 00:36:12.753 "small_cache_size": 128, 00:36:12.753 "large_cache_size": 16, 00:36:12.753 "task_count": 2048, 00:36:12.753 "sequence_count": 2048, 00:36:12.753 "buf_count": 2048 00:36:12.753 } 00:36:12.753 } 00:36:12.753 ] 00:36:12.753 }, 00:36:12.753 { 00:36:12.753 "subsystem": "bdev", 00:36:12.753 "config": [ 00:36:12.753 { 00:36:12.753 "method": "bdev_set_options", 00:36:12.753 "params": { 00:36:12.753 "bdev_io_pool_size": 65535, 00:36:12.753 "bdev_io_cache_size": 256, 00:36:12.753 "bdev_auto_examine": true, 00:36:12.753 "iobuf_small_cache_size": 128, 00:36:12.753 "iobuf_large_cache_size": 16 00:36:12.753 } 00:36:12.753 }, 00:36:12.753 { 00:36:12.753 "method": "bdev_raid_set_options", 00:36:12.753 "params": { 00:36:12.753 "process_window_size_kb": 1024 00:36:12.753 } 00:36:12.753 }, 00:36:12.753 { 00:36:12.753 "method": "bdev_iscsi_set_options", 00:36:12.753 "params": { 00:36:12.753 "timeout_sec": 30 00:36:12.753 } 00:36:12.753 }, 00:36:12.753 { 00:36:12.753 "method": "bdev_nvme_set_options", 00:36:12.753 "params": { 00:36:12.753 "action_on_timeout": "none", 00:36:12.753 "timeout_us": 0, 00:36:12.753 "timeout_admin_us": 0, 00:36:12.753 "keep_alive_timeout_ms": 10000, 00:36:12.753 "arbitration_burst": 0, 00:36:12.753 "low_priority_weight": 0, 00:36:12.753 "medium_priority_weight": 0, 00:36:12.753 "high_priority_weight": 0, 00:36:12.753 "nvme_adminq_poll_period_us": 10000, 00:36:12.753 "nvme_ioq_poll_period_us": 0, 00:36:12.753 "io_queue_requests": 512, 00:36:12.753 "delay_cmd_submit": true, 00:36:12.753 "transport_retry_count": 4, 00:36:12.753 "bdev_retry_count": 3, 00:36:12.753 "transport_ack_timeout": 0, 00:36:12.753 "ctrlr_loss_timeout_sec": 0, 00:36:12.753 "reconnect_delay_sec": 0, 00:36:12.753 "fast_io_fail_timeout_sec": 0, 00:36:12.753 "disable_auto_failback": false, 00:36:12.753 "generate_uuids": false, 00:36:12.753 "transport_tos": 0, 00:36:12.753 "nvme_error_stat": false, 00:36:12.753 "rdma_srq_size": 0, 00:36:12.753 "io_path_stat": false, 00:36:12.753 "allow_accel_sequence": false, 00:36:12.753 "rdma_max_cq_size": 0, 00:36:12.753 "rdma_cm_event_timeout_ms": 0, 00:36:12.753 "dhchap_digests": [ 00:36:12.753 "sha256", 00:36:12.753 "sha384", 00:36:12.753 "sha512" 00:36:12.753 ], 00:36:12.753 "dhchap_dhgroups": [ 00:36:12.753 "null", 
00:36:12.753 "ffdhe2048", 00:36:12.753 "ffdhe3072", 00:36:12.753 "ffdhe4096", 00:36:12.753 "ffdhe6144", 00:36:12.753 "ffdhe8192" 00:36:12.753 ] 00:36:12.753 } 00:36:12.753 }, 00:36:12.753 { 00:36:12.753 "method": "bdev_nvme_attach_controller", 00:36:12.753 "params": { 00:36:12.753 "name": "nvme0", 00:36:12.753 "trtype": "TCP", 00:36:12.753 "adrfam": "IPv4", 00:36:12.753 "traddr": "127.0.0.1", 00:36:12.753 "trsvcid": "4420", 00:36:12.753 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:12.753 "prchk_reftag": false, 00:36:12.753 "prchk_guard": false, 00:36:12.753 "ctrlr_loss_timeout_sec": 0, 00:36:12.753 "reconnect_delay_sec": 0, 00:36:12.753 "fast_io_fail_timeout_sec": 0, 00:36:12.753 "psk": "key0", 00:36:12.753 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:12.753 "hdgst": false, 00:36:12.753 "ddgst": false 00:36:12.753 } 00:36:12.753 }, 00:36:12.753 { 00:36:12.753 "method": "bdev_nvme_set_hotplug", 00:36:12.753 "params": { 00:36:12.753 "period_us": 100000, 00:36:12.753 "enable": false 00:36:12.753 } 00:36:12.753 }, 00:36:12.753 { 00:36:12.753 "method": "bdev_wait_for_examine" 00:36:12.753 } 00:36:12.753 ] 00:36:12.753 }, 00:36:12.753 { 00:36:12.753 "subsystem": "nbd", 00:36:12.753 "config": [] 00:36:12.753 } 00:36:12.753 ] 00:36:12.753 }' 00:36:12.753 02:04:36 keyring_file -- keyring/file.sh@114 -- # killprocess 50099 00:36:12.753 02:04:36 keyring_file -- common/autotest_common.sh@947 -- # '[' -z 50099 ']' 00:36:12.753 02:04:36 keyring_file -- common/autotest_common.sh@951 -- # kill -0 50099 00:36:12.753 02:04:36 keyring_file -- common/autotest_common.sh@952 -- # uname 00:36:12.753 02:04:36 keyring_file -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:36:12.753 02:04:36 keyring_file -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 50099 00:36:12.753 02:04:36 keyring_file -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:36:12.753 02:04:36 keyring_file -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:36:12.753 02:04:36 keyring_file -- common/autotest_common.sh@965 -- # echo 'killing process with pid 50099' 00:36:12.753 killing process with pid 50099 00:36:12.753 02:04:36 keyring_file -- common/autotest_common.sh@966 -- # kill 50099 00:36:12.753 Received shutdown signal, test time was about 1.000000 seconds 00:36:12.753 00:36:12.754 Latency(us) 00:36:12.754 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:12.754 =================================================================================================================== 00:36:12.754 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:12.754 02:04:36 keyring_file -- common/autotest_common.sh@971 -- # wait 50099 00:36:13.012 02:04:36 keyring_file -- keyring/file.sh@117 -- # bperfpid=51479 00:36:13.012 02:04:36 keyring_file -- keyring/file.sh@119 -- # waitforlisten 51479 /var/tmp/bperf.sock 00:36:13.012 02:04:36 keyring_file -- common/autotest_common.sh@828 -- # '[' -z 51479 ']' 00:36:13.012 02:04:36 keyring_file -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:13.012 02:04:36 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:36:13.012 02:04:36 keyring_file -- common/autotest_common.sh@833 -- # local max_retries=100 00:36:13.012 02:04:36 keyring_file -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:36:13.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:13.012 02:04:36 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:36:13.012 "subsystems": [ 00:36:13.012 { 00:36:13.012 "subsystem": "keyring", 00:36:13.012 "config": [ 00:36:13.012 { 00:36:13.012 "method": "keyring_file_add_key", 00:36:13.012 "params": { 00:36:13.012 "name": "key0", 00:36:13.012 "path": "/tmp/tmp.VfNmgmP9j8" 00:36:13.012 } 00:36:13.012 }, 00:36:13.012 { 00:36:13.012 "method": "keyring_file_add_key", 00:36:13.012 "params": { 00:36:13.012 "name": "key1", 00:36:13.012 "path": "/tmp/tmp.7CMbPrcOnU" 00:36:13.012 } 00:36:13.012 } 00:36:13.012 ] 00:36:13.012 }, 00:36:13.012 { 00:36:13.012 "subsystem": "iobuf", 00:36:13.012 "config": [ 00:36:13.012 { 00:36:13.012 "method": "iobuf_set_options", 00:36:13.012 "params": { 00:36:13.012 "small_pool_count": 8192, 00:36:13.012 "large_pool_count": 1024, 00:36:13.012 "small_bufsize": 8192, 00:36:13.012 "large_bufsize": 135168 00:36:13.012 } 00:36:13.012 } 00:36:13.012 ] 00:36:13.012 }, 00:36:13.012 { 00:36:13.012 "subsystem": "sock", 00:36:13.012 "config": [ 00:36:13.012 { 00:36:13.012 "method": "sock_impl_set_options", 00:36:13.012 "params": { 00:36:13.012 "impl_name": "posix", 00:36:13.012 "recv_buf_size": 2097152, 00:36:13.012 "send_buf_size": 2097152, 00:36:13.012 "enable_recv_pipe": true, 00:36:13.012 "enable_quickack": false, 00:36:13.012 "enable_placement_id": 0, 00:36:13.012 "enable_zerocopy_send_server": true, 00:36:13.012 "enable_zerocopy_send_client": false, 00:36:13.012 "zerocopy_threshold": 0, 00:36:13.012 "tls_version": 0, 00:36:13.012 "enable_ktls": false 00:36:13.012 } 00:36:13.012 }, 00:36:13.012 { 00:36:13.012 "method": "sock_impl_set_options", 00:36:13.012 "params": { 00:36:13.012 "impl_name": "ssl", 00:36:13.012 "recv_buf_size": 4096, 00:36:13.012 "send_buf_size": 4096, 00:36:13.012 "enable_recv_pipe": true, 00:36:13.012 "enable_quickack": false, 00:36:13.012 "enable_placement_id": 0, 00:36:13.012 "enable_zerocopy_send_server": true, 00:36:13.012 "enable_zerocopy_send_client": false, 00:36:13.012 "zerocopy_threshold": 0, 00:36:13.012 "tls_version": 0, 00:36:13.012 "enable_ktls": false 00:36:13.012 } 00:36:13.012 } 00:36:13.012 ] 00:36:13.012 }, 00:36:13.012 { 00:36:13.012 "subsystem": "vmd", 00:36:13.012 "config": [] 00:36:13.012 }, 00:36:13.012 { 00:36:13.012 "subsystem": "accel", 00:36:13.012 "config": [ 00:36:13.012 { 00:36:13.012 "method": "accel_set_options", 00:36:13.012 "params": { 00:36:13.012 "small_cache_size": 128, 00:36:13.012 "large_cache_size": 16, 00:36:13.012 "task_count": 2048, 00:36:13.012 "sequence_count": 2048, 00:36:13.012 "buf_count": 2048 00:36:13.012 } 00:36:13.012 } 00:36:13.012 ] 00:36:13.012 }, 00:36:13.012 { 00:36:13.012 "subsystem": "bdev", 00:36:13.012 "config": [ 00:36:13.012 { 00:36:13.012 "method": "bdev_set_options", 00:36:13.012 "params": { 00:36:13.012 "bdev_io_pool_size": 65535, 00:36:13.012 "bdev_io_cache_size": 256, 00:36:13.012 "bdev_auto_examine": true, 00:36:13.012 "iobuf_small_cache_size": 128, 00:36:13.012 "iobuf_large_cache_size": 16 00:36:13.012 } 00:36:13.012 }, 00:36:13.012 { 00:36:13.012 "method": "bdev_raid_set_options", 00:36:13.012 "params": { 00:36:13.012 "process_window_size_kb": 1024 00:36:13.012 } 00:36:13.012 }, 00:36:13.012 { 00:36:13.012 "method": "bdev_iscsi_set_options", 00:36:13.012 "params": { 00:36:13.012 "timeout_sec": 30 00:36:13.012 } 00:36:13.012 }, 00:36:13.012 { 00:36:13.012 "method": "bdev_nvme_set_options", 00:36:13.012 "params": { 
00:36:13.012 "action_on_timeout": "none", 00:36:13.012 "timeout_us": 0, 00:36:13.012 "timeout_admin_us": 0, 00:36:13.012 "keep_alive_timeout_ms": 10000, 00:36:13.012 "arbitration_burst": 0, 00:36:13.012 "low_priority_weight": 0, 00:36:13.012 "medium_priority_weight": 0, 00:36:13.012 "high_priority_weight": 0, 00:36:13.012 "nvme_adminq_poll_period_us": 10000, 00:36:13.012 "nvme_ioq_poll_period_us": 0, 00:36:13.012 "io_queue_requests": 512, 00:36:13.012 "delay_cmd_submit": true, 00:36:13.012 "transport_retry_count": 4, 00:36:13.012 "bdev_retry_count": 3, 00:36:13.012 "transport_ack_timeout": 0, 00:36:13.012 "ctrlr_loss_timeout_sec": 0, 00:36:13.012 "reconnect_delay_sec": 0, 00:36:13.012 "fast_io_fail_timeout_sec": 0, 00:36:13.012 "disable_auto_failback": false, 00:36:13.012 "generate_uuids": false, 00:36:13.012 "transport_tos": 0, 00:36:13.012 "nvme_error_stat": false, 00:36:13.012 "rdma_srq_size": 0, 00:36:13.012 "io_path_stat": false, 00:36:13.012 "allow_accel_sequence": false, 00:36:13.012 "rdma_max_cq_size": 0, 00:36:13.012 "rdma_cm_event_timeout_ms": 0, 00:36:13.012 "dhchap_digests": [ 00:36:13.012 "sha256", 00:36:13.012 "sha384", 00:36:13.012 "sha512" 00:36:13.012 ], 00:36:13.012 "dhchap_dhgroups": [ 00:36:13.012 "null", 00:36:13.012 "ffdhe2048", 00:36:13.012 "ffdhe3072", 00:36:13.012 "ffdhe4096", 00:36:13.012 "ffdhe6144", 00:36:13.012 "ffdhe8192" 00:36:13.012 ] 00:36:13.012 } 00:36:13.012 }, 00:36:13.012 { 00:36:13.012 "method": "bdev_nvme_attach_controller", 00:36:13.012 "params": { 00:36:13.012 "name": "nvme0", 00:36:13.012 "trtype": "TCP", 00:36:13.012 "adrfam": "IPv4", 00:36:13.012 "traddr": "127.0.0.1", 00:36:13.012 "trsvcid": "4420", 00:36:13.012 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:13.012 "prchk_reftag": false, 00:36:13.012 "prchk_guard": false, 00:36:13.012 "ctrlr_loss_timeout_sec": 0, 00:36:13.012 "reconnect_delay_sec": 0, 00:36:13.012 "fast_io_fail_timeout_sec": 0, 00:36:13.012 "psk": "key0", 00:36:13.012 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:13.012 "hdgst": false, 00:36:13.012 "ddgst": false 00:36:13.012 } 00:36:13.012 }, 00:36:13.012 { 00:36:13.012 "method": "bdev_nvme_set_hotplug", 00:36:13.012 "params": { 00:36:13.012 "period_us": 100000, 00:36:13.012 "enable": false 00:36:13.012 } 00:36:13.012 }, 00:36:13.012 { 00:36:13.012 "method": "bdev_wait_for_examine" 00:36:13.012 } 00:36:13.012 ] 00:36:13.012 }, 00:36:13.012 { 00:36:13.012 "subsystem": "nbd", 00:36:13.012 "config": [] 00:36:13.012 } 00:36:13.012 ] 00:36:13.012 }' 00:36:13.012 02:04:36 keyring_file -- common/autotest_common.sh@837 -- # xtrace_disable 00:36:13.012 02:04:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:13.012 [2024-05-15 02:04:36.776564] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 22.11.4 initialization... 
00:36:13.013 [2024-05-15 02:04:36.776657] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid51479 ] 00:36:13.013 EAL: No free 2048 kB hugepages reported on node 1 00:36:13.013 [2024-05-15 02:04:36.851247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:13.013 [2024-05-15 02:04:36.938801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:13.270 [2024-05-15 02:04:37.110818] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:13.834 02:04:37 keyring_file -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:36:13.834 02:04:37 keyring_file -- common/autotest_common.sh@861 -- # return 0 00:36:13.834 02:04:37 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:36:13.834 02:04:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:13.834 02:04:37 keyring_file -- keyring/file.sh@120 -- # jq length 00:36:14.090 02:04:37 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:36:14.090 02:04:37 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:36:14.090 02:04:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:14.090 02:04:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:14.090 02:04:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:14.090 02:04:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:14.090 02:04:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:14.348 02:04:38 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:36:14.348 02:04:38 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:36:14.348 02:04:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:14.348 02:04:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:14.348 02:04:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:14.348 02:04:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:14.348 02:04:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:14.605 02:04:38 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:36:14.605 02:04:38 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:36:14.605 02:04:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:36:14.605 02:04:38 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:36:14.863 02:04:38 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:36:14.863 02:04:38 keyring_file -- keyring/file.sh@1 -- # cleanup 00:36:14.863 02:04:38 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.VfNmgmP9j8 /tmp/tmp.7CMbPrcOnU 00:36:14.863 02:04:38 keyring_file -- keyring/file.sh@20 -- # killprocess 51479 00:36:14.863 02:04:38 keyring_file -- common/autotest_common.sh@947 -- # '[' -z 51479 ']' 00:36:14.863 02:04:38 keyring_file -- common/autotest_common.sh@951 -- # kill -0 51479 00:36:14.863 02:04:38 keyring_file -- common/autotest_common.sh@952 -- # uname 
00:36:14.863 02:04:38 keyring_file -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:36:14.863 02:04:38 keyring_file -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 51479 00:36:14.863 02:04:38 keyring_file -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:36:14.863 02:04:38 keyring_file -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:36:14.863 02:04:38 keyring_file -- common/autotest_common.sh@965 -- # echo 'killing process with pid 51479' 00:36:14.863 killing process with pid 51479 00:36:14.863 02:04:38 keyring_file -- common/autotest_common.sh@966 -- # kill 51479 00:36:14.863 Received shutdown signal, test time was about 1.000000 seconds 00:36:14.863 00:36:14.863 Latency(us) 00:36:14.863 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:14.863 =================================================================================================================== 00:36:14.863 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:14.863 02:04:38 keyring_file -- common/autotest_common.sh@971 -- # wait 51479 00:36:15.120 02:04:38 keyring_file -- keyring/file.sh@21 -- # killprocess 50087 00:36:15.120 02:04:38 keyring_file -- common/autotest_common.sh@947 -- # '[' -z 50087 ']' 00:36:15.120 02:04:38 keyring_file -- common/autotest_common.sh@951 -- # kill -0 50087 00:36:15.120 02:04:38 keyring_file -- common/autotest_common.sh@952 -- # uname 00:36:15.120 02:04:38 keyring_file -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:36:15.120 02:04:38 keyring_file -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 50087 00:36:15.120 02:04:38 keyring_file -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:36:15.120 02:04:38 keyring_file -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:36:15.120 02:04:38 keyring_file -- common/autotest_common.sh@965 -- # echo 'killing process with pid 50087' 00:36:15.120 killing process with pid 50087 00:36:15.120 02:04:38 keyring_file -- common/autotest_common.sh@966 -- # kill 50087 00:36:15.120 [2024-05-15 02:04:38.980073] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:36:15.120 [2024-05-15 02:04:38.980135] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:36:15.120 02:04:38 keyring_file -- common/autotest_common.sh@971 -- # wait 50087 00:36:15.684 00:36:15.684 real 0m13.980s 00:36:15.684 user 0m34.966s 00:36:15.684 sys 0m3.262s 00:36:15.684 02:04:39 keyring_file -- common/autotest_common.sh@1123 -- # xtrace_disable 00:36:15.684 02:04:39 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:15.684 ************************************ 00:36:15.684 END TEST keyring_file 00:36:15.684 ************************************ 00:36:15.684 02:04:39 -- spdk/autotest.sh@292 -- # [[ n == y ]] 00:36:15.684 02:04:39 -- spdk/autotest.sh@304 -- # '[' 0 -eq 1 ']' 00:36:15.684 02:04:39 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:36:15.684 02:04:39 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:36:15.684 02:04:39 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']' 00:36:15.684 02:04:39 -- spdk/autotest.sh@326 -- # '[' 0 -eq 1 ']' 00:36:15.684 02:04:39 -- spdk/autotest.sh@331 -- # '[' 0 -eq 1 ']' 00:36:15.684 02:04:39 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:36:15.684 02:04:39 -- spdk/autotest.sh@339 -- # 
'[' 0 -eq 1 ']' 00:36:15.684 02:04:39 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:36:15.684 02:04:39 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']' 00:36:15.684 02:04:39 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:36:15.684 02:04:39 -- spdk/autotest.sh@359 -- # [[ 0 -eq 1 ]] 00:36:15.684 02:04:39 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:36:15.684 02:04:39 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:36:15.684 02:04:39 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:36:15.684 02:04:39 -- spdk/autotest.sh@376 -- # trap - SIGINT SIGTERM EXIT 00:36:15.684 02:04:39 -- spdk/autotest.sh@378 -- # timing_enter post_cleanup 00:36:15.684 02:04:39 -- common/autotest_common.sh@721 -- # xtrace_disable 00:36:15.684 02:04:39 -- common/autotest_common.sh@10 -- # set +x 00:36:15.684 02:04:39 -- spdk/autotest.sh@379 -- # autotest_cleanup 00:36:15.684 02:04:39 -- common/autotest_common.sh@1389 -- # local autotest_es=0 00:36:15.684 02:04:39 -- common/autotest_common.sh@1390 -- # xtrace_disable 00:36:15.684 02:04:39 -- common/autotest_common.sh@10 -- # set +x 00:36:17.582 INFO: APP EXITING 00:36:17.582 INFO: killing all VMs 00:36:17.582 INFO: killing vhost app 00:36:17.582 INFO: EXIT DONE 00:36:18.955 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:36:18.955 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:36:18.955 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:36:18.955 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:36:18.955 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:36:18.955 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:36:18.955 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:36:18.955 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:36:18.955 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:36:18.955 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:36:18.955 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:36:18.955 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:36:18.955 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:36:18.955 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:36:18.955 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:36:18.955 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:36:18.955 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:36:20.327 Cleaning 00:36:20.327 Removing: /var/run/dpdk/spdk0/config 00:36:20.327 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:20.327 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:20.327 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:20.327 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:20.327 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:20.327 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:20.327 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:20.327 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:20.327 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:20.327 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:20.327 Removing: /var/run/dpdk/spdk1/config 00:36:20.327 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:20.327 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:20.327 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:20.327 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:20.327 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:36:20.327 
Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:36:20.327 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:36:20.327 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:36:20.327 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:36:20.327 Removing: /var/run/dpdk/spdk1/hugepage_info 00:36:20.327 Removing: /var/run/dpdk/spdk1/mp_socket 00:36:20.327 Removing: /var/run/dpdk/spdk2/config 00:36:20.327 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:20.327 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:20.327 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:20.327 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:20.327 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:36:20.327 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:36:20.327 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:36:20.327 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:36:20.327 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:36:20.327 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:20.327 Removing: /var/run/dpdk/spdk3/config 00:36:20.327 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:20.327 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:36:20.327 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:36:20.327 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:20.327 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:36:20.327 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:36:20.327 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:36:20.327 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:36:20.327 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:36:20.327 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:20.327 Removing: /var/run/dpdk/spdk4/config 00:36:20.327 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:20.327 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:20.327 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:20.327 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:20.327 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:36:20.327 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:36:20.327 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:36:20.327 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:36:20.327 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:36:20.327 Removing: /var/run/dpdk/spdk4/hugepage_info 00:36:20.327 Removing: /dev/shm/bdev_svc_trace.1 00:36:20.327 Removing: /dev/shm/nvmf_trace.0 00:36:20.327 Removing: /dev/shm/spdk_tgt_trace.pid3911198 00:36:20.327 Removing: /var/run/dpdk/spdk0 00:36:20.327 Removing: /var/run/dpdk/spdk1 00:36:20.327 Removing: /var/run/dpdk/spdk2 00:36:20.327 Removing: /var/run/dpdk/spdk3 00:36:20.327 Removing: /var/run/dpdk/spdk4 00:36:20.327 Removing: /var/run/dpdk/spdk_pid11514 00:36:20.327 Removing: /var/run/dpdk/spdk_pid11928 00:36:20.327 Removing: /var/run/dpdk/spdk_pid12329 00:36:20.327 Removing: /var/run/dpdk/spdk_pid12738 00:36:20.327 Removing: /var/run/dpdk/spdk_pid13319 00:36:20.327 Removing: /var/run/dpdk/spdk_pid13741 00:36:20.327 Removing: /var/run/dpdk/spdk_pid14244 00:36:20.327 Removing: /var/run/dpdk/spdk_pid14653 00:36:20.327 Removing: /var/run/dpdk/spdk_pid17456 00:36:20.327 Removing: /var/run/dpdk/spdk_pid17593 00:36:20.327 Removing: /var/run/dpdk/spdk_pid21672 00:36:20.327 Removing: /var/run/dpdk/spdk_pid21841 00:36:20.327 Removing: /var/run/dpdk/spdk_pid23445 00:36:20.327 Removing: 
/var/run/dpdk/spdk_pid28733 00:36:20.327 Removing: /var/run/dpdk/spdk_pid28764 00:36:20.327 Removing: /var/run/dpdk/spdk_pid32047 00:36:20.327 Removing: /var/run/dpdk/spdk_pid33451 00:36:20.327 Removing: /var/run/dpdk/spdk_pid34958 00:36:20.327 Removing: /var/run/dpdk/spdk_pid36322 00:36:20.327 Removing: /var/run/dpdk/spdk_pid37729 00:36:20.327 Removing: /var/run/dpdk/spdk_pid38486 00:36:20.327 Removing: /var/run/dpdk/spdk_pid3909147 00:36:20.327 Removing: /var/run/dpdk/spdk_pid3910382 00:36:20.327 Removing: /var/run/dpdk/spdk_pid3911198 00:36:20.327 Removing: /var/run/dpdk/spdk_pid3911637 00:36:20.327 Removing: /var/run/dpdk/spdk_pid3912323 00:36:20.327 Removing: /var/run/dpdk/spdk_pid3912459 00:36:20.327 Removing: /var/run/dpdk/spdk_pid3913179 00:36:20.327 Removing: /var/run/dpdk/spdk_pid3913194 00:36:20.327 Removing: /var/run/dpdk/spdk_pid3913435 00:36:20.327 Removing: /var/run/dpdk/spdk_pid3914628 00:36:20.327 Removing: /var/run/dpdk/spdk_pid3915536 00:36:20.327 Removing: /var/run/dpdk/spdk_pid3915723 00:36:20.327 Removing: /var/run/dpdk/spdk_pid3916031 00:36:20.327 Removing: /var/run/dpdk/spdk_pid3916233 00:36:20.327 Removing: /var/run/dpdk/spdk_pid3916424 00:36:20.327 Removing: /var/run/dpdk/spdk_pid3916584 00:36:20.327 Removing: /var/run/dpdk/spdk_pid3916738 00:36:20.327 Removing: /var/run/dpdk/spdk_pid3916924 00:36:20.327 Removing: /var/run/dpdk/spdk_pid3917502 00:36:20.327 Removing: /var/run/dpdk/spdk_pid3919853 00:36:20.327 Removing: /var/run/dpdk/spdk_pid3920018 00:36:20.327 Removing: /var/run/dpdk/spdk_pid3920192 00:36:20.327 Removing: /var/run/dpdk/spdk_pid3920216 00:36:20.327 Removing: /var/run/dpdk/spdk_pid3920632 00:36:20.585 Removing: /var/run/dpdk/spdk_pid3920635 00:36:20.585 Removing: /var/run/dpdk/spdk_pid3921068 00:36:20.585 Removing: /var/run/dpdk/spdk_pid3921073 00:36:20.585 Removing: /var/run/dpdk/spdk_pid3921366 00:36:20.585 Removing: /var/run/dpdk/spdk_pid3921372 00:36:20.585 Removing: /var/run/dpdk/spdk_pid3921546 00:36:20.585 Removing: /var/run/dpdk/spdk_pid3921672 00:36:20.585 Removing: /var/run/dpdk/spdk_pid3922041 00:36:20.585 Removing: /var/run/dpdk/spdk_pid3922193 00:36:20.585 Removing: /var/run/dpdk/spdk_pid3922390 00:36:20.585 Removing: /var/run/dpdk/spdk_pid3922554 00:36:20.585 Removing: /var/run/dpdk/spdk_pid3922704 00:36:20.585 Removing: /var/run/dpdk/spdk_pid3922768 00:36:20.585 Removing: /var/run/dpdk/spdk_pid3923042 00:36:20.585 Removing: /var/run/dpdk/spdk_pid3923203 00:36:20.585 Removing: /var/run/dpdk/spdk_pid3923364 00:36:20.585 Removing: /var/run/dpdk/spdk_pid3923515 00:36:20.585 Removing: /var/run/dpdk/spdk_pid3923789 00:36:20.585 Removing: /var/run/dpdk/spdk_pid3923949 00:36:20.585 Removing: /var/run/dpdk/spdk_pid3924103 00:36:20.585 Removing: /var/run/dpdk/spdk_pid3924375 00:36:20.585 Removing: /var/run/dpdk/spdk_pid3924530 00:36:20.585 Removing: /var/run/dpdk/spdk_pid3924695 00:36:20.585 Removing: /var/run/dpdk/spdk_pid3924848 00:36:20.585 Removing: /var/run/dpdk/spdk_pid3925120 00:36:20.585 Removing: /var/run/dpdk/spdk_pid3925277 00:36:20.585 Removing: /var/run/dpdk/spdk_pid3925442 00:36:20.585 Removing: /var/run/dpdk/spdk_pid3925697 00:36:20.585 Removing: /var/run/dpdk/spdk_pid3925867 00:36:20.585 Removing: /var/run/dpdk/spdk_pid3926032 00:36:20.585 Removing: /var/run/dpdk/spdk_pid3926191 00:36:20.585 Removing: /var/run/dpdk/spdk_pid3926461 00:36:20.585 Removing: /var/run/dpdk/spdk_pid3926618 00:36:20.585 Removing: /var/run/dpdk/spdk_pid3926807 00:36:20.585 Removing: /var/run/dpdk/spdk_pid3927016 00:36:20.585 Removing: 
00:36:20.585 Removing: /var/run/dpdk/spdk_pid3929376
00:36:20.585 Removing: /var/run/dpdk/spdk_pid3984769
00:36:20.585 Removing: /var/run/dpdk/spdk_pid3987671
00:36:20.585 Removing: /var/run/dpdk/spdk_pid3994917
00:36:20.585 Removing: /var/run/dpdk/spdk_pid3998501
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4001238
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4001651
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4009971
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4010044
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4010630
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4011286
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4011829
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4012228
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4012346
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4012488
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4012620
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4012631
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4013293
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4013834
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4014486
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4014891
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4014898
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4015154
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4016030
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4016751
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4022394
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4022665
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4025463
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4029450
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4031613
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4039192
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4045087
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4046273
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4046938
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4058125
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4060632
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4084561
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4087752
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4088929
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4090129
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4090261
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4090402
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4090424
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4090855
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4092171
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4092775
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4093194
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4095419
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4095732
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4096297
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4099100
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4102652
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4106202
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4130955
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4133480
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4137647
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4138594
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4139614
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4142496
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4145036
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4149938
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4149949
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4153013
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4153240
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4153389
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4153651
00:36:20.585 Removing: /var/run/dpdk/spdk_pid4153657
00:36:20.843 Removing: /var/run/dpdk/spdk_pid4154729
00:36:20.843 Removing: /var/run/dpdk/spdk_pid4155909
00:36:20.843 Removing: /var/run/dpdk/spdk_pid4157086
00:36:20.843 Removing: /var/run/dpdk/spdk_pid4158261
00:36:20.843 Removing: /var/run/dpdk/spdk_pid4159909
00:36:20.843 Removing: /var/run/dpdk/spdk_pid4161304
00:36:20.843 Removing: /var/run/dpdk/spdk_pid4165188
00:36:20.843 Removing: /var/run/dpdk/spdk_pid4165562
00:36:20.843 Removing: /var/run/dpdk/spdk_pid4166657
00:36:20.843 Removing: /var/run/dpdk/spdk_pid4167248
00:36:20.843 Removing: /var/run/dpdk/spdk_pid4170989
00:36:20.843 Removing: /var/run/dpdk/spdk_pid4172956
00:36:20.843 Removing: /var/run/dpdk/spdk_pid4176651
00:36:20.843 Removing: /var/run/dpdk/spdk_pid4180262
00:36:20.843 Removing: /var/run/dpdk/spdk_pid4186763
00:36:20.843 Removing: /var/run/dpdk/spdk_pid4191599
00:36:20.843 Removing: /var/run/dpdk/spdk_pid4191601
00:36:20.843 Removing: /var/run/dpdk/spdk_pid44353
00:36:20.843 Removing: /var/run/dpdk/spdk_pid44747
00:36:20.843 Removing: /var/run/dpdk/spdk_pid45135
00:36:20.843 Removing: /var/run/dpdk/spdk_pid46661
00:36:20.843 Removing: /var/run/dpdk/spdk_pid47060
00:36:20.843 Removing: /var/run/dpdk/spdk_pid47456
00:36:20.843 Removing: /var/run/dpdk/spdk_pid50087
00:36:20.843 Removing: /var/run/dpdk/spdk_pid50099
00:36:20.843 Removing: /var/run/dpdk/spdk_pid51479
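The long run of "Removing:" entries above is autotest's post-run cleanup sweep: it deletes the per-instance DPDK runtime directories under /var/run/dpdk (spdk0 through spdk4 plus the spdk_pid* lock files), their hugepage fbarray metadata, and the trace shared-memory files the test targets left in /dev/shm. A minimal sketch of such a sweep follows; the globs and helper name are illustrative, not the actual autotest cleanup code.

    #!/usr/bin/env bash
    # Hypothetical cleanup sweep; mirrors the "Removing:" output above.
    shopt -s nullglob                       # empty globs expand to nothing
    cleanup_dpdk_state() {
        local f
        for f in /var/run/dpdk/spdk*/* /dev/shm/*_trace.* /var/run/dpdk/spdk*; do
            echo "Removing: $f"             # same log line format as above
            rm -rf "$f"
        done
    }
    cleanup_dpdk_state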
00:36:20.843 Clean
00:36:20.843 02:04:44 -- common/autotest_common.sh@1448 -- # return 0
00:36:20.843 02:04:44 -- spdk/autotest.sh@380 -- # timing_exit post_cleanup
00:36:20.843 02:04:44 -- common/autotest_common.sh@727 -- # xtrace_disable
00:36:20.843 02:04:44 -- common/autotest_common.sh@10 -- # set +x
00:36:20.843 02:04:44 -- spdk/autotest.sh@382 -- # timing_exit autotest
00:36:20.843 02:04:44 -- common/autotest_common.sh@727 -- # xtrace_disable
00:36:20.843 02:04:44 -- common/autotest_common.sh@10 -- # set +x
00:36:20.843 02:04:44 -- spdk/autotest.sh@383 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:36:20.843 02:04:44 -- spdk/autotest.sh@385 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:36:20.843 02:04:44 -- spdk/autotest.sh@385 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:36:20.843 02:04:44 -- spdk/autotest.sh@387 -- # hash lcov
00:36:20.843 02:04:44 -- spdk/autotest.sh@387 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:36:20.843 02:04:44 -- spdk/autotest.sh@389 -- # hostname
00:36:20.843 02:04:44 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:36:21.100 geninfo: WARNING: invalid characters removed from testname!
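Coverage capture only runs when the toolchain supports it: the `hash lcov` check above verifies that lcov is installed, the CC_TYPE test skips gcov-style capture for clang builds, and the hostname becomes the lcov test name (-t spdk-gp-06 here). A sketch of that gate, with SPDK_DIR and OUT as assumed placeholder variables:

    # Sketch of the coverage gate implied above (SPDK_DIR/OUT are placeholders).
    if hash lcov 2>/dev/null && [[ "$CC_TYPE" != *clang* ]]; then
        # -c captures counters from the .gcda files under -d into a tracefile
        lcov --no-external -q -c -d "$SPDK_DIR" -t "$(hostname)" -o "$OUT/cov_test.info"
    fi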
00:36:53.155 02:05:12 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:53.155 02:05:16 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:55.679 02:05:19 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:59.010 02:05:22 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:01.532 02:05:25 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:04.804 02:05:28 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:07.326 02:05:30 -- spdk/autotest.sh@396 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
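The sequence above is the standard lcov merge-and-filter flow: the pre-test baseline (cov_base.info) is unioned with the post-test capture (-a ... -a ...), then out-of-tree and uninteresting sources (the bundled DPDK, system headers under /usr, and the example and helper apps) are stripped with successive -r passes before the intermediate tracefiles are deleted. Condensed, with the long workspace paths shortened for readability:

    # Same flow as the commands above, with paths shortened for readability.
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov -q -r cov_total.info "$pat" -o cov_total.info   # drop records matching $pat
    done
    rm -f cov_base.info cov_test.info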
00:37:07.326 02:05:31 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:37:07.326 02:05:31 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:37:07.326 02:05:31 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:37:07.326 02:05:31 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:37:07.326 02:05:31 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:07.326 02:05:31 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:07.326 02:05:31 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:07.326 02:05:31 -- paths/export.sh@5 -- $ export PATH
00:37:07.326 02:05:31 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:07.326 02:05:31 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:37:07.326 02:05:31 -- common/autobuild_common.sh@437 -- $ date +%s
00:37:07.326 02:05:31 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715731531.XXXXXX
00:37:07.326 02:05:31 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715731531.q3i8Z7
00:37:07.326 02:05:31 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:37:07.326 02:05:31 -- common/autobuild_common.sh@443 -- $ '[' -n v22.11.4 ']'
00:37:07.326 02:05:31 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:37:07.326 02:05:31 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
00:37:07.326 02:05:31 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:37:07.326 02:05:31 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:37:07.326 02:05:31 -- common/autobuild_common.sh@453 -- $ get_config_params
00:37:07.326 02:05:31 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:37:07.326 02:05:31 -- common/autotest_common.sh@10 -- $ set +x
00:37:07.326 02:05:31 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
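get_config_params reproduces the ./configure flags the build under test used, while scanbuild_exclude and scanbuild assemble a clang scan-build wrapper that skips the bundled DPDK, xnvme, and /tmp and fails the build when static-analysis bugs are found (--status-bugs). How those two strings are typically consumed, sketched here with placeholder usage rather than the verbatim autobuild code:

    # Illustrative consumers of the strings built above (not the verbatim autobuild code).
    ./configure $config_params    # word-splitting is intentional: one configure flag per word
    $scanbuild make -j48          # expands to: scan-build -o <outdir> --exclude <dir>... --status-bugs make -j48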
00:37:07.326 02:05:31 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:37:07.326 02:05:31 -- pm/common@17 -- $ local monitor
00:37:07.326 02:05:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:07.326 02:05:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:07.326 02:05:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:07.326 02:05:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:07.326 02:05:31 -- pm/common@21 -- $ date +%s
00:37:07.326 02:05:31 -- pm/common@25 -- $ sleep 1
00:37:07.326 02:05:31 -- pm/common@21 -- $ date +%s
00:37:07.326 02:05:31 -- pm/common@21 -- $ date +%s
00:37:07.326 02:05:31 -- pm/common@21 -- $ date +%s
00:37:07.326 02:05:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715731531
00:37:07.326 02:05:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715731531
00:37:07.326 02:05:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715731531
00:37:07.326 02:05:31 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715731531
00:37:07.326 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715731531_collect-vmstat.pm.log
00:37:07.326 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715731531_collect-cpu-temp.pm.log
00:37:07.326 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715731531_collect-cpu-load.pm.log
00:37:07.326 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715731531_collect-bmc-pm.bmc.pm.log
00:37:08.258 02:05:32 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:37:08.258 02:05:32 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48
00:37:08.258 02:05:32 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:08.258 02:05:32 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:37:08.258 02:05:32 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:37:08.258 02:05:32 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:37:08.258 02:05:32 -- spdk/autopackage.sh@19 -- $ timing_finish
00:37:08.258 02:05:32 -- common/autotest_common.sh@733 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:37:08.258 02:05:32 -- common/autotest_common.sh@734 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:37:08.258 02:05:32 -- common/autotest_common.sh@736 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:37:08.258 02:05:32 -- spdk/autopackage.sh@20 -- $ exit 0
00:37:08.258 02:05:32 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:37:08.258 02:05:32 -- pm/common@29 -- $ signal_monitor_resources TERM
00:37:08.258 02:05:32 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:37:08.258 02:05:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:08.258 02:05:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:37:08.258 02:05:32 -- pm/common@44 -- $ pid=62675
00:37:08.258 02:05:32 -- pm/common@50 -- $ kill -TERM 62675
00:37:08.258 02:05:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:08.258 02:05:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:37:08.258 02:05:32 -- pm/common@44 -- $ pid=62676
00:37:08.258 02:05:32 -- pm/common@50 -- $ kill -TERM 62676
00:37:08.258 02:05:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:08.258 02:05:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:37:08.258 02:05:32 -- pm/common@44 -- $ pid=62678
00:37:08.258 02:05:32 -- pm/common@50 -- $ kill -TERM 62678
00:37:08.258 02:05:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:08.258 02:05:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:37:08.258 02:05:32 -- pm/common@44 -- $ pid=62715
00:37:08.258 02:05:32 -- pm/common@50 -- $ sudo -E kill -TERM 62715
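Each collector started by start_monitor_resources records its PID in a .pid file under the power/ output directory; stop_monitor_resources, armed via the `trap ... EXIT` above, walks those files and sends TERM to whichever monitors are still running (sudo -E for the BMC collector, which was started with elevated privileges). The pattern in miniature, with function and variable names assumed for illustration:

    # Minimal pid-file start/stop pattern used by the pm monitors above.
    start_monitor() {
        "$1" -d "$POWER_DIR" -l &               # launch one collector in the background
        echo $! > "$POWER_DIR/${1##*/}.pid"     # remember its PID for shutdown
    }
    stop_monitors() {                           # armed earlier via: trap stop_monitors EXIT
        local pidfile
        for pidfile in "$POWER_DIR"/*.pid; do
            [[ -e $pidfile ]] && kill -TERM "$(cat "$pidfile")"
        done
    }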
"${MONITOR_RESOURCES[@]}" 00:37:08.258 02:05:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:37:08.258 02:05:32 -- pm/common@44 -- $ pid=62676 00:37:08.258 02:05:32 -- pm/common@50 -- $ kill -TERM 62676 00:37:08.258 02:05:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:08.258 02:05:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:37:08.258 02:05:32 -- pm/common@44 -- $ pid=62678 00:37:08.258 02:05:32 -- pm/common@50 -- $ kill -TERM 62678 00:37:08.258 02:05:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:08.258 02:05:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:37:08.258 02:05:32 -- pm/common@44 -- $ pid=62715 00:37:08.258 02:05:32 -- pm/common@50 -- $ sudo -E kill -TERM 62715 00:37:08.258 + [[ -n 3802767 ]] 00:37:08.258 + sudo kill 3802767 00:37:08.267 [Pipeline] } 00:37:08.284 [Pipeline] // stage 00:37:08.290 [Pipeline] } 00:37:08.307 [Pipeline] // timeout 00:37:08.313 [Pipeline] } 00:37:08.329 [Pipeline] // catchError 00:37:08.335 [Pipeline] } 00:37:08.352 [Pipeline] // wrap 00:37:08.358 [Pipeline] } 00:37:08.374 [Pipeline] // catchError 00:37:08.383 [Pipeline] stage 00:37:08.385 [Pipeline] { (Epilogue) 00:37:08.398 [Pipeline] catchError 00:37:08.400 [Pipeline] { 00:37:08.414 [Pipeline] echo 00:37:08.415 Cleanup processes 00:37:08.421 [Pipeline] sh 00:37:08.699 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:08.699 62849 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:37:08.699 62941 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:08.710 [Pipeline] sh 00:37:08.983 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:08.983 ++ awk '{print $1}' 00:37:08.983 ++ grep -v 'sudo pgrep' 00:37:08.983 + sudo kill -9 62849 00:37:08.994 [Pipeline] sh 00:37:09.271 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:19.243 [Pipeline] sh 00:37:19.535 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:19.535 Artifacts sizes are good 00:37:19.545 [Pipeline] archiveArtifacts 00:37:19.551 Archiving artifacts 00:37:19.754 [Pipeline] sh 00:37:20.026 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:37:20.037 [Pipeline] cleanWs 00:37:20.044 [WS-CLEANUP] Deleting project workspace... 00:37:20.044 [WS-CLEANUP] Deferred wipeout is used... 00:37:20.049 [WS-CLEANUP] done 00:37:20.051 [Pipeline] } 00:37:20.069 [Pipeline] // catchError 00:37:20.077 [Pipeline] sh 00:37:20.348 + logger -p user.info -t JENKINS-CI 00:37:20.356 [Pipeline] } 00:37:20.372 [Pipeline] // stage 00:37:20.378 [Pipeline] } 00:37:20.392 [Pipeline] // node 00:37:20.396 [Pipeline] End of Pipeline 00:37:20.427 Finished: SUCCESS